17 November 2008
Monopoly: electronic banking edition
There is, I just learned from a television commercial, a Monopoly Electronic Banking Edition.
The children of the future, it appears, will perhaps not be able to claim that Monopoly is educational because it requires them to do math. (Although I'll admit that I always found Monopoly frustrating because other people's mental math is slower than mine.)
Somewhat more seriously, are the children of the future (or, perhaps, the present) going to feel that they have less reason to learn mathematics because they don't deal with cash? That seems like it would be something that would motivate at least some students to learn basic arithmetic.
13 November 2008
The geometry of a game
From 360: Shinju, a geometrical game. You're given an arrangement of shells in some of the squares of a square grid. One of the shells hides a pearl. Your goal is to find it, opening at most four shells. When you open a shell, it either contains the pearl or a number; that number is the distance to the pearl in the Chebyshev distance (the "chessboard" metric, in which a step to a diagonally adjacent square counts as distance 1).
There's a nice little result hiding here:
Problem 1: Given any arrangement of shells, prove that it is possible to find the shell in four clicks.
Since you always get four clicks (at least in the levels I played), the game becomes trivial if you can find a constructive proof of that fact. (If you're the sort of person who likes to rack up points in video games, though, I think you get more points if you don't use all your clicks -- so how do you set things up so that you're likely to solve the puzzle in fewer than four clicks?)
Problem 2: Generalize (to higher dimensions, different metrics, etc.)
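To get a feel for Problem 1, here is a small Python sketch (mine, not from 360) of the bookkeeping involved: each opened shell reports its Chebyshev distance to the pearl, and the set of shells consistent with all the reports shrinks accordingly.

def chebyshev(a, b):
    # Chebyshev ("chessboard") distance between two grid cells
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def candidates(shells, clues):
    # clues maps an opened shell to the distance it reported;
    # return the shells that could still hide the pearl
    return [s for s in shells
            if all(chebyshev(s, probe) == d for probe, d in clues.items())]

shells = [(0, 0), (2, 1), (4, 4), (1, 3)]
print(candidates(shells, {(0, 0): 3}))   # [(1, 3)]

A constructive strategy amounts to choosing each probe so that, whatever distance comes back, the candidate set is forced down to a single shell within four probes.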
02 January 2008
Stop-motion Tetris
Stop-motion Tetris is made out of people! Some people got together and played the part of the squares in a game of Tetris; a lecture hall with seats in a grid was the background.
What surprised me is how, at least in this particular game of Tetris, each individual piece got broken up pretty quickly, with only one or two pieces out of the original four remaining on the board. (The four people who made up each dropping piece were wearing the same color T-shirt, and there were enough different colors that for the most part one could assume that people wearing the same color and next to each other were part of the same piece.) This isn't obvious if you're used to, say, the NES version of Tetris (which is the one I played the most); the color scheme is different there. If you stop to think about it for a moment, though, it makes perfect sense that this should happen; it's very rare that four squares which fall at the same time all get eliminated at the same time.
Incidentally, the people writing that post say "Now, we're not mathologists". What is a mathologist? I would argue that mathologists are to mathematicians as musicologists (i. e. people who study music in a scholarly fashion, as opposed to those who produce it) are to musicians. However, there don't seem to be many mathologists in this sense; it's quite difficult to get the sort of deep appreciation for mathematics that one needs without actually doing mathematics, or so it seems to me.
By the way, Tetris is also a traditional Russian board game involving vodka. (Not really. And the amounts of vodka that this description invokes would kill even a Russian.)
29 December 2007
Gravity pods
Gravity Pods is a very fun game, and it vividly illustrates the principle of sensitive dependence on initial conditions. You're asked to fire a projectile into an area containing some gravitational attractors, in an attempt to hit a given target. (At later levels you can move some of the attractors around. At even later levels there are repulsive objects, a nice twist. There might be even more tricks later, but I haven't gotten that far.)
It was an even more vivid illustration of sensitivity before: in earlier incarnations of the game there were levels that could not be beaten -- it was possible to fire at an angle of 10 degrees from the horizontal, or 10.1 degrees (0.1 degrees is the finest control you get over the angle at which the projectile is fired), but 10.05 degrees (say) was what you really needed. Similarly, the movable attractors and repellers can only be placed to the nearest pixel. I don't think this problem exists any more, but there are still annoying (although perhaps fascinating) levels in which sub-pixel positioning would be nice.
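To see the sensitivity concretely, here is a toy Python simulation (my own sketch, with made-up units and a single inverse-square attractor; this is not the game's actual physics): two shots fired 0.1 degrees apart can end up nowhere near each other.

import math

G = 1000.0                       # attractor strength (arbitrary units)
attractor = (300.0, 200.0)       # a single fixed gravity pod

def endpoint(angle_deg, steps=2000, dt=0.05):
    # fire from the left edge and integrate the motion crudely (Euler)
    x, y = 0.0, 200.0
    v = 10.0
    vx = v * math.cos(math.radians(angle_deg))
    vy = v * math.sin(math.radians(angle_deg))
    for _ in range(steps):
        dx, dy = attractor[0] - x, attractor[1] - y
        r = math.hypot(dx, dy) + 1e-9
        vx += G * dx / r**3 * dt     # inverse-square pull toward the pod
        vy += G * dy / r**3 * dt
        x += vx * dt
        y += vy * dt
    return x, y

print(endpoint(10.0))   # compare the endpoints of two nearby launch angles:
print(endpoint(10.1))   # shots that pass close to the pod diverge sharply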
23 November 2007
strategy in Tammet's solitaire
This was written a few months ago; I'm just getting around to posting it now for some reason.
Recently I read Daniel Tammet's book Born on a Blue Day: Inside the Extraordinary Mind of an Autistic Savant. Tammet is a savant with Asperger's syndrome; he's a very good mental calculator, and he is also synaesthetic. In his particular case, he experiences numbers and words with certain colors, which explains the title of the book. He also sometimes sees numbers as shapes, to which he attributes his calculational ability; there's an interesting point where he explains that he sees multiplication by watching the "shapes" he associates with two numbers come together, with the space in between being the shape of the product. This seems to imply that each prime number would have to have some basic sort of shape.
At one point he mentions a solitaire card game that he invented. The game works as follows:
Some of the time I played a simple form of solitaire of my own creation, with a deck of cards in which each card was given a numerical value: ace as 1, jack 11, queen 12, king 13, while the numbers on the other cards determined their values. The object of the game is to keep as many cards as possible from the deck. At the start, the deck is shuffled and then four cards are played onto a pile. If, after the first card, the total value of the cards in the pile is at any point a prime number, then those cards are lost....
The player now decides whether to risk putting another card onto the pile or to start a new pile from scratch. If the player decides not to risk a new card on the pile, then the cards from the pile are safe and are retained. If the player plays a new card and the total reaches a prime number at any point, then the whole pile of cards is lost and a new pile is started. The game ends when all fifty-two cards in the deck have been played into piles, some lost and some successfully held. The player counts up the total number of safely retained cards to work out his final score.
Presumably the reason for not declaring the pile dead if the first card is prime is that the chances of the number on any given card being prime are just too high -- six out of the first thirteen natural numbers are prime! The first natural question to ask is the probability that the initial four-card pile is allowed to live. It wouldn't be hard to compute this exactly, but I figured I'd get a better feel for the game by playing with actual cards. A fifty-two card deck naturally breaks up into thirteen such piles; I went through a deck of cards ten times, seeing 130 piles, and got fifty-three piles which were allowed to live -- about 41% of them.
I then ran the following Maple code:
S := 0;
# count the "good" piles: quadruplets (i, j, k, l) of card values for which
# none of the running sums i+j, i+j+k, i+j+k+l is prime
for i from 1 to 13 do
for j from 1 to 13 do
for k from 1 to 13 do
for l from 1 to 13 do
  if not(isprime(i+j) or isprime(i+j+k) or isprime(i+j+k+l)) then S := S+1; fi;
od; od; od; od;
This loops through all the quadruplets (i, j, k, l) with each member between 1 and 13; 1 is added to the accumulator S for each quadruplet which corresponds to a good pile. There are 10026 possible "good" piles, it turns out, out of a total of 28,561; that's about 35%. Going through the deck ten times as I did, you'd expect 46 good piles; I got 53, which isn't that far off given the small sample size.
(Note that implicit here is the assumption that each quadruplet is equally likely; in fact, those which contain repetitions are less likely when you're dealing with an actual deck of cards.)
Incidentally, if 13 is replaced by some other number n, then the probability of a "good" quadruplet increases as n is increased. If n=2 it's 1/16 -- the only good quadruplet is (2, 2, 2, 2). If n = 10 we get 0.3074. If n = 20 there are 64575 good quadruplets out of 160000, and the probability of a good quadruplet is 0.4035. If n=52 (this corresponds to each card in the deck having its own number) the probability of a "good" quadruplet is up around 48%. I suspect the probability approaches one as n gets large, but this code runs quite slowly as n gets larger. There are faster ways to do the computation, but they take more coding.
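One such faster way: instead of looping over all n^4 quadruplets, track how many ways each running total can arise. A Python sketch (mine, using sympy's isprime; the original code above was Maple):

from sympy import isprime

def good_quadruplets(n):
    # ways[s] = number of ways to reach partial sum s without any of the
    # sums after the first card having been prime
    ways = {s: 1 for s in range(1, n + 1)}   # the first card is unconstrained
    for _ in range(3):                       # cards 2, 3 and 4
        new = {}
        for s, count in ways.items():
            for c in range(1, n + 1):
                if not isprime(s + c):
                    new[s + c] = new.get(s + c, 0) + count
        ways = new
    return sum(ways.values())

print(good_quadruplets(13))            # 10026
print(good_quadruplets(13) / 13**4)    # about 0.351

Since the partial sums never exceed 4n, this does roughly n^2 work per card instead of n^4 overall, so pushing n well past 52 is no problem.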
Now, let's say we're dealt four cards, and their sum is k. Do we want a fifth? As Tammet points out, you want a fifth card if there aren't that many primes in the next thirteen numbers. My instinct is that you only want to "hit" (somehow the blackjack terminology seems natural here) if the sum has probability less than one-fifth of being prime -- that is, if there are two or fewer primes among the numbers from k+1 to k+13. k can be at most 52; it turns out that under this strategy you should only hit on 44 or 45.
But this is only the correct strategy for maximizing the number of cards you get on a single pile. (We're trading four cards held with certainty for five cards held with probability at least 4/5.) The object of the game, as stated, is to maximize the number of cards you keep going through the whole deck. With the naive strategy of always staying after the fourth card, we get to keep on average 35% of the cards.
So let's say I have a four-card pile going, with sum k. Considering those four cards and the next one, I expect to keep 4.35 of them if I stay (the four cards in hand, plus the next card's 35% chance of surviving as the start of a new pile); I expect to keep 5m/13 of them if I hit, where m is the number of composites between k+1 and k+13. So hitting is the right move exactly when 5m/13 > 4.35, or m > 11.31; I only want to hit if the numbers from k+1 to k+13 contain zero or one primes. But k is at most 52. The sequence of primes begins
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67...
and so I should never hit. I wonder if Tammet is aware of this?
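This is easy to confirm mechanically. A few lines of Python (mine again; the minimum four-card sum is 4, from four aces):

from sympy import isprime

def primes_in_window(k):
    # number of primes among k+1, ..., k+13
    return sum(isprime(k + i) for i in range(1, 14))

# hitting would require at most one prime in the window; in fact
# every window for k = 4, ..., 52 contains at least two primes
print(min(primes_in_window(k) for k in range(4, 53)))   # 2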
Obviously, though, not hitting makes the game less interesting...
Daniel Tammet has a website and a blog.
09 November 2007
Monopoly strategy
How to Win at Monopoly, by Tim Darling.
This is basically a user-friendlization of some of the rather unwieldy tables Probabilities in the Game of Monopoly®, which uses a Markov-chain-based approach. If you can compute the probability that a given space gets landed on, then you know how often you'll collect rent for that space if you have it; this in turn lets you know which properties are good to buy, put houses on, and so on. (Incidentally, the best return on investment in the game seems to come from buying the third house on any given property.)
However, the assumption that underlies all this is that when playing Monopoly, you're trying to maximize your expected winnings. Unless you play for actual money (and maybe not even then), what you're actually trying to do is stay in the game longer than everybody else, which is different. In an actual game you might want to take bets that are more likely to pay out rather than bets that are risky but pay extraordinarily well when they do. This doesn't really apply to Monopoly the way I've stated it (all the squares have roughly equal probabilities of being landed on), but it seems to imply that if you want to play conservatively (say, because you're winning and you'd like to preserve that), you can do so by building up two sets of cheap properties rather than one set of expensive properties. Conversely, if you're behind you probably want to build on expensive properties and hope desperately that someone lands on them. Diversification is usually good, but not in those cases where the expected value is kind of crappy and only the best possible outcomes will keep you alive.
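The flavor of the Markov-chain calculation is easy to reproduce. Here is a deliberately oversimplified Python sketch (mine, not Darling's: it ignores jail, Chance, and Community Chest, which are exactly the features that make the real landing distribution non-uniform) that power-iterates the transition matrix for two-dice moves around a 40-square board:

import numpy as np

N = 40   # squares on a Monopoly board

# probability of each two-dice total, 2 through 12
roll = {s: (6 - abs(s - 7)) / 36 for s in range(2, 13)}

# transition matrix: from square i, advance by the dice total, mod 40
P = np.zeros((N, N))
for i in range(N):
    for s, p in roll.items():
        P[i][(i + s) % N] += p

# stationary distribution by power iteration
pi = np.full(N, 1.0 / N)
for _ in range(1000):
    pi = pi @ P
print(pi.round(4))   # uniform (0.025 each) in this stripped-down model

In this stripped-down model every square is landed on equally often; it's the go-to-jail square and the cards that skew the real chain, which is what the full tables capture.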
21 October 2007
The game of "thermonuclear pennies"
The game of thermonuclear pennies. This is, not surprisingly, related to the game of nuclear pennies; the game of nuclear pennies is in turn related to the "Seven trees in one" theorem of Blass, which I wrote about back in July. (Very roughly speaking, the "number of trees" can be shown to be "the primitive sixth root of unity", but the seventh power of this number is itself! Objects of Categories as Complex Numbers, by Marcelo Fiore and Tom Leinster, legitimates this.1)
The game of nuclear pennies works as follows: we start with a semi-infinite strip of squares, some of which have pennies on them. We can "split" a penny on site n into two pennies, one on site n-1 and one on site n+1; we can "fuse" pennies on sites n-1 and n+1 into one penny on site n. We can analyze positions in this game using arithmetic in the ring of integers with a sixth root of unity, ω, adjoined. Given a position with a_n coins on the nth site, associate the number
Σ_n a_n ω^n
with this position; it turns out that two positions can be reached from each other if and only if their associated numbers are the same. (We have the relations ω^3 = -1 and ω - ω^2 = 1.)
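As a quick numerical sanity check (my sketch, not from the linked post), both moves do preserve this associated number when ω = e^{iπ/3}:

import cmath

omega = cmath.exp(1j * cmath.pi / 3)   # a primitive sixth root of unity

def value(position):
    # position maps a site n to the number of pennies a_n on it
    return sum(a * omega**n for n, a in position.items())

# split the penny on site 3 into pennies on sites 2 and 4; the move is
# value-preserving precisely because omega - omega^2 = 1
before = {3: 1}
after = {2: 1, 4: 1}
print(abs(value(before) - value(after)) < 1e-12)   # True

Fusing is the same move read in reverse, so it preserves the value too.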
In thermonuclear pennies the splitting rule is a bit different; it turns out that "sixth root of unity" can be replaced with "fourth root of unity", which is the imaginary unit. Positions in this game represent Gaussian integers. And there's even a way to multiply the positions, which corresponds to multiplication of Gaussian integers.
Challenge (which may or may not be hard, I haven't thought about it at all): develop such a penny-fusing game where the positions can be understood as elements of the ring of integers with a fifth root of unity adjoined. (I don't expect the result to be as nice; for example, Z[e^{2πi/5}] isn't a lattice in the complex plane. I'm not sure why that's relevant, but it might be.)
1. Is "legitimate" a verb, meaning "to make legitimate"? (The stress in the verb would be on the last syllable.) I think it should be, if it's not already. I've heard "precise" used as a verb (I don't remember how it's pronounced), and "legitimate" strikes me as more phonetically akin to verbs than "precise" is, probably because of the -ate suffix.
26 August 2007
how many Scrabble racks are there?
I've been playing a lot of Scrabble on Facebook lately. While Googling around for some stuff on Scrabble strategy, I found a couple things of interest. (I'm not the type to memorize words, mostly because that crosses some sort of invisible line between "fun" and "work", but knowing about things like how to keep a good mix of vowels and consonants on my rack makes the game more interesting, because I'm less likely to get stuck with a bad rack, which is frustrating.)
First, this handout from the MIT Scrabble club. (I actually knew the guy who started it. He lived down the hall from me my senior year, when he was a freshman.) On that handout they quote Joel Sherman, who says:
Accept the idea that Scrabble is a math game just as much as it is a word game. All the strategic theory of the game is based on statistical analysis, probabilities, spatial relationships on the board, maximizing the value of small-numbered tiles by playing bingos (using all 7 tiles on your rack to earn the 50-point bonus), and large-numbered tiles by causing them to interact with the colored premium squares on the board, or with other words on the board. (I.e.: you can score just as much for placing a 4-letter word in a place where it lies parallel to 4 other letters or whole words without hitting any premium squares as you might for the same word hitting a double or even triple word square but forming only one or two words on the play.)
Indeed, the conventional wisdom is that people who are good at math do well at Scrabble, and that having a large vocabulary is not particularly useful. (I suspect a lot of this is because large vocabularies tend to consist of long words, and words much longer than seven letters are quite rare in Scrabble.) The same thing is said to be true of crossword puzzles, although there it's not as obvious why there should be a correlation, and my belief is that the effect is not as pronounced.
I also found at Scrabble on the Brain a question: "How many games of Scrabble would you have to play before you see every possible rack?"
How to answer this question is not obvious, because the racks one gets in a game aren't independent. I'll answer the easier question of how many games of Scrabble you'd have to play before you see every possible rack on the opening play. This is an instance of the "coupon collector's problem", which asks: if we pick a sequence of elements from {1, 2, ..., n} at random, with replacement, how long will it take until we've picked each of the n integers at least once? The standard version of the problem has one picking uniformly at random, so this models, for example, the number of rolls of a standard six-sided die that would be necessary to have each number come up at least once. The answer in this case is n(1 + 1/2 + 1/3 + ... + 1/n), which is asymptotically equal to n(log n + γ), where γ = 0.5772156649... is the Euler-Mascheroni constant. I suspect that in the case where the distribution is non-uniform, as it is for Scrabble racks, it takes longer to observe all possibilities;
and so the important thing is to compute the number of possible Scrabble racks. (If this has been done before, I can't find it!)
If Scrabble tiles could be differentiated (so instead of having nine tiles labeled A, we had tiles labeled A1 through A9), then the answer would just be the number of ways of picking seven tiles from 100, which is C(100,7) = 16,007,560,800. But they're not. I can't see a nice way to compute the actual number (although that doesn't mean it's not there, and I'd appreciate knowing it!) But the following sampling procedure should yield an estimate. Let X be a random variable defined as follows: pick a rack R of seven Scrabble tiles uniformly at random from all 7-sets of (distinguished) Scrabble tiles, and let X(R) be the number of different 7-sets of distinguished Scrabble tiles that collapse to the same set of non-distinguished tiles. So, for example, if we picked the tiles EXAMPLE, there are 12 E's, 1 X, 9 A's, 2 M's, 2 P's, and 4 L's in the tile set, so the number of possible sets of distinguished tiles reading EXAMPLE is C(12,2) C(1,1) C(9,1) C(2,1) C(2,1) C(4,1) = 9504. Then the number of possible racks of undistinguished tiles is the sum of 1/X(R) over all possible racks of distinguished tiles, or C(100,7) times the expectation of 1/X(R).
(Compare if we were trying to find the number of possible one-letter racks, that is, letters; we'd have, say, 9 terms which are each 1/9 corresponding to the labeled tiles A1 through A9, 2 terms each 1/2 corresponding to B1 and B2, and so on. The terms corresponding to each tile type sum to 1, so the sum is just the number of tile types, or 27 including the blank.)
And the expectation of 1/X(R) can be found by sampling. I generated a million racks at random and evaluated 1/X(R) (with some hacked-together Maple code); the expectation of 1/X(R) calculated from this sample is 0.0002014090957, and I approximate the number of possible Scrabble racks to be this quantity times C(100,7), or 3224068. (I'm not interested enough to try to calculate the standard error on this -- and I hope I can get an exact answer.) The number of games of Scrabble you'd have to play before you see every possible rack on the opening play is probably at least 3224068 log 3224068, or around fifty million.
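Here's the whole sampling procedure in Python rather than Maple (my reconstruction, not the original hacked-together code; the tile counts are the standard 100-tile English Scrabble distribution, with ? standing for the two blanks):

import random
from math import comb, log
from collections import Counter

# standard English Scrabble tile distribution (100 tiles in all)
COUNTS = {'A': 9, 'B': 2, 'C': 2, 'D': 4, 'E': 12, 'F': 2, 'G': 3,
          'H': 2, 'I': 9, 'J': 1, 'K': 1, 'L': 4, 'M': 2, 'N': 6,
          'O': 8, 'P': 2, 'Q': 1, 'R': 6, 'S': 4, 'T': 6, 'U': 4,
          'V': 2, 'W': 2, 'X': 1, 'Y': 2, 'Z': 1, '?': 2}
BAG = [c for c, n in COUNTS.items() for _ in range(n)]

def inv_X(rack):
    # X(R) = number of distinguished 7-sets that collapse to this rack
    x = 1
    for letter, k in Counter(rack).items():
        x *= comb(COUNTS[letter], k)
    return 1 / x

trials = 10**6
mean = sum(inv_X(random.sample(BAG, 7)) for _ in range(trials)) / trials
racks = mean * comb(100, 7)       # estimated number of distinct racks
print(racks)                      # should come out near 3.2 million
print(racks * log(racks))         # coupon-collector estimate, ~5e7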
20 August 2007
an interesting fact about a variant of Hex
I have long known about the game of Hex, which is played with markers of two colors on a rhombus-shaped section of a hexagonal lattice. One player places red markers, the other blue markers; they alternate in playing. The red player is attempting to create a chain of red hexagons extending from the top of the rhombus to the bottom; the blue player is simultaneously attempting to create a chain of blue hexagons from the left side to the right.
There's a nice theorem which states that a game of Hex can never end without a winner. Say you're playing red. If you have blocked off all possible horizontal chains that blue might make, you must have made your own red chain connecting the top and the bottom, and vice versa. (This isn't exactly a proof but it's the gist of any formal proof of the result.)
However, I learned recently from Bollobas and Riordan's book Percolation that although this is true, an analogous result is not true if one were to play Hex on a square lattice. If we think of one color of hexagons as "open" and the other as "closed", then the earlier result about Hex says that in the rhombus-shaped triangular lattice of points (which is dual to the hexagonal lattice), there's either an open path from the top of the lattice to the bottom or a closed path from the left to the right.
On the square lattice, there is the question of the right way to define adjacency; are two squares which touch each other only at a corner adjacent to each other? It turns out that if we say they are (so each square has eight neighbors) then it's possible to color the squares of the board so that there's a red vertical path and a blue horizontal path. (This doesn't create a problem for a hypothetical game of Hex on a square lattice, though, because we just award the win to the player who completes their path first.) If we let each square have four neighbors, though, then ties are possible; one could have a situation where the entire board is filled in and there's no red vertical path or blue horizontal path. (An example for both of these occurs if we divide the board into four quadrants, and color the entire northeastern and southwestern quadrants red and the northwestern and southeastern quadrants blue.)
But Lemma 5.9 of Bollobas and Riordan tells us that
If we colour the squares of an n by m chessboard [blue] and [red] in an arbitrary manner, then either a rook can move from the left side to the right passing only over [blue] squares, or a king can move from top to bottom using only [red] squares, but not both.
(I've changed the colors; Bollobas and Riordan use the more traditional black and white.)
However, there's a difference between the two cases, even so. If you color in all the hexagons in an n-by-n section of the hexagonal lattice uniformly at random -- so each is red or blue with probability 1/2, and each hexagon is independent -- then by a symmetry argument, the probabilities of having a red vertical path and of having a blue horizontal path are equal, and thus are both 1/2. This doesn't mean Hex is fair -- the first player is in fact guaranteed to win under perfect play. But in the square-lattice version of the game, I suspect the player who is allowed to consider squares which share a corner as adjacent has a very large advantage.
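The lemma is easy to test empirically. Here is a little Python (my own sketch) that colors a board at random and checks that exactly one of the two crossings exists, with the rook (4-neighbor) player going left to right and the king (8-neighbor) player going top to bottom:

import random
from collections import deque

ROOK = [(0, 1), (0, -1), (1, 0), (-1, 0)]            # edge-adjacent squares
KING = ROOK + [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # corners count too

def crosses(grid, color, moves, starts, is_goal):
    # breadth-first search over same-colored squares from one edge
    n = len(grid)
    queue = deque(c for c in starts if grid[c[0]][c[1]] == color)
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if is_goal(r, c):
            return True
        for dr, dc in moves:
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and (rr, cc) not in seen and grid[rr][cc] == color:
                seen.add((rr, cc))
                queue.append((rr, cc))
    return False

n = 10
for _ in range(1000):
    g = [[random.choice('RB') for _ in range(n)] for _ in range(n)]
    blue_lr = crosses(g, 'B', ROOK, [(r, 0) for r in range(n)], lambda r, c: c == n - 1)
    red_tb = crosses(g, 'R', KING, [(0, c) for c in range(n)], lambda r, c: r == n - 1)
    assert blue_lr != red_tb   # Lemma 5.9: exactly one of the two crossings

Counting how often each side wins over many such trials would quantify just how big the king player's advantage is.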
14 August 2007
Primeshooter
Primeshooter, from 1729.com. You shoot numbers with their prime factors; when you hit a number with one of its prime factors (2, 3, 5, 7, 11, 13, 17, or 19) it is divided by that number. Once you reduce a number to a prime, you can hit it with a "prime" bullet that makes it go away; numbers also go away if you successfully get them down to 1. You lose a life if a number hits you; if a number reaches the bottom of the screen without getting down to 1 it reappears at the top. Sometimes (I don't know when) it appears in a bright red box, in which case you'll lose a life if it reaches the bottom. Sometimes you get an extra life.
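The core mechanic is just repeated division. A trivial Python illustration (mine, not the game's code):

def shoot(n, p):
    # hitting n with the prime p divides it out if p is a factor
    return n // p if n % p == 0 else n

n = 84                    # 84 = 2 * 2 * 3 * 7
for p in (2, 2, 3, 7):
    n = shoot(n, p)
print(n)                  # 1 -- the number is gone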
I got a score of 834 (= 2 × 3 × 139, although that's irrelevant), which means I successfully shot at 834 prime numbers. I don't recommend trying to beat me, because even if you succeed you will lose about thirty-five minutes of your real life.
The rest of the web page looks kind of fun, too.
An exercise for the reader: why is it fitting that this is on 1729.com?
19 July 2007
links: checkers, and the difference between mathematicians and statisticians
Checkers is a draw (from Scientific American); if this is true, it's the most complicated game solved by computers to date.
(edit, 4:29 pm: the New York Times has picked up the checkers story. Jonathan Schaeffer, who led the research, says:
“It’s a computational proof,” Dr. Schaeffer said. “It’s certainly not a formal mathematical proof.”
By this, it seems that he means that it doesn't "count" as a formal proof because basically what they did was go through all the possibilities that could occur in a game of checkers, without really having any insight into the game. I would still call it a formal proof, as it does establish beyond a doubt that checkers is a draw under perfect play (assuming that no errors slipped in), but I can understand Schaeffer's feeling that what he has done is not mathematics. I fear this sentence might give the wrong impression to laypeople, though.)
Math is like music, statistics is like literature: basically, math and music are both self-contained worlds, while practitioners of statistics and literature both benefit from having life experience. Thus the best mathematicians and musicians are younger than the best statisticians and novelists. (In the comments to this post, I claim that this differs among different branches of mathematics; the conventional wisdom says this is true but I know of no hard evidence.)