Showing posts with label television. Show all posts
08 March 2009
A commercial for math
Look, it's a commercial for math!
Okay, so it's really a commercial for IBM. But you don't know that until the very end.
17 January 2009
Mathematics Illuminated
While at the laundromat today, I saw an episode of Mathematics Illuminated, about game theory. This is a series of 13 half-hour episodes on "major themes in the field of mathematics"; the game theory episode covered Nash equilibria, the prisoner's dilemma, evolutionarily stable strategies, etc. (I may be leaving out some things, because there was laundry-machine noise.) Their intended audience seems to be high school teachers and interested but perhaps mathematically unsophisticated adult learners.
It appears you can watch the whole series online. The main mathematician involved is Dan Rockmore of Dartmouth.
(And no, I don't know what channel it was on. Like I said, it wasn't my TV.)
02 December 2008
You can't say you're a liar
From Family Guy:
"Chris, everything I say is a lie. Except that. And that. And that. And that. And that. And that. And that. And that."
"Chris, everything I say is a lie. Except that. And that. And that. And that. And that. And that. And that. And that."
11 November 2008
Combinatorics of universal remotes
From the wrapping of a universal remote I recently bought: "Controls VCR/DVD combos, TV/VCR combos, and TV/DVD combos!"
This is because it controls two devices, and those are the three types of device that people have. But I guess "controls two devices" doesn't work from a marketing standpoint, so they have to list all 3-choose-2 subsets. (And I also find elsewhere on the packaging "manage up to 8 separate devices with one remote control". Something doesn't add up here.)
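Just to spell it out: the packaging's list really is all of the 2-element subsets of {TV, VCR, DVD}.

```python
from itertools import combinations
from math import comb

devices = ["TV", "VCR", "DVD"]
combos = list(combinations(devices, 2))
print(combos)         # [('TV', 'VCR'), ('TV', 'DVD'), ('VCR', 'DVD')]
print(comb(3, 2))     # 3-choose-2 = 3
```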
15 September 2008
Doogie Howser on the Copernican principle
"You never make plans with a girl for further in the future than you've been going out." -- Barney Stinson, the Neil Patrick Harris character on How I Met Your Mother. (Neil Patrick Harris played Doogie Howser on the show by that name.)
This is an example of the "Copernican principle", which I've written about before -- if you assume that there's nothing special about now, you're equally likely to be in the second half of the relationship as the first. And if you're in the second half of the relationship, and you violate this rule, you'll have broken up before the plans happen.
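That fifty-fifty claim is easy to sanity-check with a simulation. (The exponential length distribution here is an arbitrary stand-in; the conclusion only needs "now" to be a uniformly random moment of the relationship.)

```python
import random

def in_second_half(trials=100_000):
    """Observe a random moment of a random-length relationship;
    count how often that moment falls in the second half."""
    hits = 0
    for _ in range(trials):
        length = random.expovariate(1.0)   # arbitrary length distribution
        now = random.uniform(0, length)    # "nothing special about now"
        if now > length / 2:
            hits += 1
    return hits / trials

print(in_second_half())  # close to 0.5
```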
03 July 2008
Lightning and lotteries
From a rerun of Friends:
Ross: Do you know what your odds are of winning the lottery? You have a better chance of being struck by lightning 42 times.
Chandler: Yes, but there's six of us, so we'd only have to get struck by lightning 7 times.
Joey: I like those odds!
Unsurprisingly, Chandler seems to know that probability doesn't work this way; Joey doesn't.
Also, Ross is wrong. It seems the record for getting struck by lightning is Roy Sullivan, seven times. So nobody's been hit 42 times, while plenty of people have won the lottery.
I don't know how to calculate the odds that someone gets hit 42 times by lightning in their life; the lifetime incidence of getting hit is about one in three thousand, and if you figure that lightning strikes are a Poisson process with rate 1/3000 per lifetime, as this article states, then the probability that lightning hits one person seven times is something like (1/3000)^7/7!, or one in about 10^28. (That's the probability that a Poisson with parameter 1/3000 takes the value exactly 7; I'm ignoring the normalizing factor of exp(-1/3000) and the even-more-negligible probability that someone gets hit eight or more times.)
Since the number of people who have ever existed is much less than 10^28, the existence of a person who's been hit seven times is very strong evidence that that's not the right model. My hunch is that each person's lightning strikes are a Poisson process, but with a rate that depends on the person. Roy Sullivan was a park ranger.
But the 1 in 3000 figure can't be trusted; the article also claims the annual risk of getting hit by lightning is one in 700,000. People don't live 700,000/3,000 (i. e. 233) years.
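For what it's worth, the back-of-the-envelope number computes directly (taking the article's 1-in-3000 lifetime rate at face value, untrustworthy as it may be):

```python
from math import exp, factorial

lam = 1 / 3000                            # lifetime strikes per person
p7 = exp(-lam) * lam**7 / factorial(7)    # Poisson P(X = 7)
print(f"P(hit exactly 7 times) = {p7:.1e}")  # about 9e-29, one in ~10^28
# Even with ~1e11 humans ever born, the expected number of 7-time victims:
print(f"expected such people = {1e11 * p7:.0e}")
```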
27 May 2008
Abuse of averages
At Freakonomics, they're talking about an advertisement that says that the average termite eats 24 hours a day. (This is an ad for a pest control company.)
Of course, this isn't possible!
Nearly every termite is actually below average; I don't know much about termites but at some point they slack off.
However, the average termite colony might be eating close to 24 hours a day, in that at least one of the termites in your house may be eating at any given moment. And I think this is the point the ad was trying to make -- termites are constantly destroying your house. (Even this might not be true. Do termites sleep at night?)
19 March 2008
Early retirement makes you live longer? Or kills you? Who knows?
"Staying on the job five extra years lowers your risk of dying by ten percent." -- on a local TV newscast.
The point here is that early retirement causes one to die earlier, even if you're healthy. That seems believable.
But we all have, of course, a one hundred percent risk of dying.
Presumably they meant that it lowers the risk of dying by ten percent per year. I don't want to try to get more data from that, though, because even that's a big assumption; the numbers in this context might be meaningless.
What I would want is a statement of the form "staying on the job five extra years raises your life expectancy by X years". (For what it's worth, a bit of poking around the internet doesn't give a value for X, but does give the impression that some people think X is negative.)
04 January 2008
The math of Futurama
Dr. Sarah's Futurama Math, from Sarah Greenwald. Apparently a new Futurama DVD was just recently released, if you care about that sort of thing. (Personally, I like the show but not enough to go out of my way to watch it.) The DVD includes a lecture on the math of Futurama. I didn't know that a lot of the writers of the show had serious mathematical training, but it doesn't surprise me at all.
Also, simpsonsmath.com from Greenwald and Andrew Nestler. I like this one more, because I can get The Simpsons but not Futurama on my dirt-cheap cable package, so the Simpsons references are more current to me. I linked to this one a long time ago, but you probably weren't reading this blog then, because at the time I had maybe one percent of the readers I have now.
(In a not-all-that-strange coincidence, I'm reading William Poundstone's biography of Carl Sagan. Sagan was born in 1934, and often cited the "real" Futurama, a pavilion at the 1939 New York World's Fair, as one of the first things that pushed him towards being a scientist.)
Labels:
Futurama,
Greenwald,
humor,
Simpsons,
television
30 November 2007
Degree Clinical Protection
A commercial for "Degree Clinical Protection" deodorant said: "Do you know one in four people think they sweat more than normal?"
So another one in four people are in denial, since the number of people who sweat more than normal is probably about half. (I'm assuming that "sweating more than normal" is the sort of thing one would deny, which seems reasonable, at least in the perspective of a deodorant commercial.)
I'm reminded of the often-quoted fact that three-quarters of all incoming students at [insert prestigious university here] think they're going to be in the top quarter of their incoming class.
(The amazon.com page has a description beginning "Did you know one in four Americans worry about excessive sweating?", which I don't have a problem with, because the word "excessive" doesn't have the same statistical connotations.)
05 October 2007
Probability on Jeopardy!
No, this isn't about the probability of winning at Jeopardy or any such thing.
On Wednesday night's Jeopardy!, there was a category "Fun With Probability". It's always funny to see math categories appear on the show, because the contestants seem to avoid them. (They shy away from science categories as well, but not to the same extent; you can fake your way through a science category by having memorized a bunch of things, but you can't do that with a math category. By the way, any puns on the word "category" I might make in this post are unintentional.)
You can see the entire game at the Jeopardy! archive. The questions were as follows:
400ドル: High rollers get to play roulette on single-zero wheels, where the chance of hitting your lucky number is 1 in this
Contestants answered "36" and "32". I think "32" was a wild guess. But "36" is actually a fairly reasonable answer here. The odds of hitting your lucky number on a single-zero wheel are, in fact, 36 to 1. A single-zero wheel has thirty-seven spots -- the zero and the numbers one through 36 -- and a bet of 1ドル pays 36ドル if your number comes up. The house edge here is just 1/37, as opposed to 2/38 on the wheels with both zero and double-zero. (Is there a form of roulette where there are no zeroes? This seems like it could exist if not played in a casino. Poker is a fair game when played socially but in casinos the house takes a percentage of each pot. Then again, I don't see people getting together socially to spin a wheel and hand each other money; poker involves infinitely more skill than roulette. I mean the word "infinitely" here literally, because roulette takes zero skill.)
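Spelling out the house-edge arithmetic (a straight-up bet pays 35-to-1, i.e. a 1ドル bet returns 36ドル):

```python
def house_edge(pockets, payout=35):
    """Expected loss per 1ドル straight-up bet on a wheel with this many pockets."""
    p_win = 1 / pockets
    ev = p_win * payout - (1 - p_win) * 1   # expected profit per 1ドル bet
    return -ev

print(f"single zero (37 pockets): {house_edge(37):.4f}")  # 1/37, about 0.0270
print(f"double zero (38 pockets): {house_edge(38):.4f}")  # 2/38, about 0.0526
```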
800ドル: The chance of getting heads on any given coin flip is 1 in 2, so the chance of getting heads 5 times in a row is 1 in this
One in thirty-two, of course; thirty-two is 2^5. (Trick question that I might give if I ever find myself teaching basic probability: the odds of getting heads on a given coin flip of a certain unfair coin are 2 to 1 against. What are the odds of getting heads five times in a row? The answer is 242 to 1 against, but I suspect a lot of people would hear "coin" and just start multiplying out the twos.)
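For the trick-question version: odds of 2 to 1 against heads mean the probability of heads is 1/3, not 1/2, so:

```python
from fractions import Fraction

p_heads = Fraction(1, 3)       # "2 to 1 against" means probability 1/3
p_streak = p_heads ** 5        # five heads in a row
against = 1 / p_streak - 1     # convert probability back to odds against
print(p_streak, "->", against, "to 1 against")   # 1/243 -> 242 to 1 against
```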
1200ドル: NASA's Spaceguard Survey watches for the "extremely small" probability of these 2 objects coming to smash Earth
Not really a probability question; you really just had to kind of guess your way through this one. There are three reasonable objects -- comets, meteors, and asteroids. The three contestants, in turn, guessed all three possible pairs of them; binomial coefficients in action! (There are plenty of Jeopardy! clues where there are three possible answers, and the best strategy seems to be to wait for the other two people to get the wrong answer. There are also a fairly large number where there are two possible answers; Sweden/Norway and Oxford/Cambridge seem like common pairs of this sort. At least for me.) If you know a bit of astronomy, though, you know that meteors are not that big, and hit the earth all the time in meteor showers.
1600ドル: Offspring of heterozygous parents have a 50-50 chance of getting a dominant vs. this type of allele
A lot of Jeopardy! clues have a bunch of extraneous verbiage; the question here is really "what's the kind of allele that's not dominant?", which might be surprisingly easy for a 1600ドル clue. But at this point I feel obliged to mention that genetics is one of the other disciplines where probability and statistics were applied early on, or so a friend of mine tells me; I suppose this is more reputable than gambling and less boring than insurance.
2000ドル: The probability of the first card dealt being an ace is 4 in 52, so the probability of the second card being an ace is 1 in this number
(There was a video here; it was pretty clear that this was the probability of the second card being an ace, given that the first card also was an ace.) There are three aces left among the fifty-one remaining cards, so it's three in 51, which is one in 17. I was screaming at the TV -- none of the contestants got it -- but then again I do this stuff for a living, and as far as I know none of them do, so I really shouldn't be too hard on them. Incidentally, this is a "baby version" of the principle at work in card counting in blackjack; if you know the deck is rich in high cards then the dealer is more likely to bust, so you bet more.
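The 3-in-51 shortcut can be checked by counting ordered two-card draws directly (exact arithmetic via `fractions`):

```python
from fractions import Fraction

aces, total = 4, 52
both_aces = aces * (aces - 1)         # ordered draws: ace then ace
first_ace = aces * (total - 1)        # ordered draws: ace then anything
p = Fraction(both_aces, first_ace)    # P(2nd is ace | 1st is ace)
print(p)   # 1/17, i.e. 3/51
```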
27 August 2007
scalene triangles on Google
Would you believe that the #1 "hot trend" on Google Trends yesterday was scalene triangle?
I am not making this up.
The Google Trends page for yesterday is actually feeding me a fair bit of traffic right now, to this page in which I solved an unrelated puzzle about scalene triangles.
This would be mystifying to me, but there was a question on "Are You Smarter Than A 5th Grader" last night asking how many angles in a scalene triangle are the same. (Incidentally, I'm not entirely sure whether the answer is zero or one; all the angles are different. There was another question on that show that had two possible answers -- they asked for the world's longest river, and whether that's the Nile or the Amazon depends on how you measure.)
The #2 "hot trend" yesterday was "mars moons", and #8 was "feet in a mile"; "proper noun" is #21, and "worlds longest river" [sic] is #38. These also related to questions on that show.
The peak time for this search was 4 PM, which seems mystifying until you realize that Google uses Pacific time; this is 7 PM Eastern, which is the time the show aired in the East. Indeed, the four hours when "scalene triangle" was most searched were 4 PM, 7 PM, 5 PM, and 6 PM Pacific (in that order); these are, of course, 7 PM Eastern, Pacific, Central, and Mountain, respectively, which I suspect is the ranking of U. S. time zones from most to least populated. I suspect that it's possible to distinguish between trends that occur because of something on TV and trends that occur because of something that happens in the "real world" (i. e. a breaking news story) by looking at this; breaking news stories should show a single peak, while TV-inspired searches should show a large peak and then a small peak three hours later.
From what I can gather about Google Trends, the quantity they're ranking is the number of searches done on a particular query in the day in question divided by the number of searches done on a "typical" day. My suspicion is that the various questions on the show were equally likely to be Googled, but more people Google for river lengths or grammatical terminology than scalene triangles on a normal day.
28 July 2007
baseball commentators say silly things
Heard just now on FOX, which is airing the Braves-Diamondbacks game:
First, the TV commentator claims that the Diamondbacks are a very streaky team this year, because they've had three separate five-game losing streaks and have won their last seven.
In fact, the Diamondbacks have won 57 games out of 105, for a winning "percentage" of 0.543; thus their probability of losing five straight games is (1 - 57/105)^5 = 0.0200. They've played one hundred and five games so far, so there are 101 games in which they could have started a five-game losing streak; thus their expected number of five-game losing streaks is something like (0.0200)(101) = 2.01. (Yes, that's right; the figure 0.0200 is rounded.) So it's not all that surprising that they've had three such streaks.
Similarly, the expected number of seven-game winning streaks is 99(57/105)^7 = 1.37; the fact that the Diamondbacks have had one such streak is not at all surprising. (If I had to guess, I'd say that 1 is actually the most likely number of such streaks, but I'm not interested enough to do the analysis.)
Of course, not every game is independent. A more sophisticated analysis would take into account which teams were playing, and so on. An even more sophisticated analysis of streaks in baseball ought to take into account the pitching rotation; the existence of a pitching rotation reduces the likelihood of streaks. Let's say your team wins one-half of its games; then the probability of winning five straight games is 0.03125. But now say you have five starting pitchers, and your team wins in 70%, 60%, 50%, 40%, and 30% of their games respectively. If each pitcher pitches every fifth game, then the probability of winning five consecutive games is now (0.7)(0.6)(0.5)(0.4)(0.3) = 0.0252.
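The arithmetic above, collected in one place:

```python
# Expected number of streak starting points for the 57-105 Diamondbacks.
expected_losing = 101 * (1 - 57/105) ** 5   # five-game losing streaks
expected_winning = 99 * (57/105) ** 7       # seven-game winning streaks
print(f"expected 5-game losing streaks:  {expected_losing:.2f}")   # ~2.0
print(f"expected 7-game winning streaks: {expected_winning:.2f}")  # ~1.4

# Rotation effect: same .500 team overall, but win probability varies by starter.
uniform = 0.5 ** 5
rotation = 0.7 * 0.6 * 0.5 * 0.4 * 0.3
print(uniform, rotation)   # 0.03125 vs ~0.0252
```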
See The Hot Hand in Sports for more of this sort of analysis.
Second, the Braves are, according to the television guy, "exactly one percentage point" behind the Phillies. The Braves are 54-50 going into today's play; the Phillies are 53-49. Baseball winning "percentages" are conventionally reported to three decimal places; the Braves are at .519, the Phillies at .520. For those of you who don't know, it's conventional to say that one team is ahead of the other by "percentage points" in a situation such as this where both teams have the same difference between their number of wins and number of losses; in this case both teams have won four more games than they've lost. But what bothers me is the "exactly one" here; of course those figures are rounded. As it turns out, the Braves' winning percentage is 0.519230...; the Phillies' is 0.519608...; the difference is 0.000377..., or not even half a point. If baseball truncated winning percentages, instead of rounding them, the two teams would be "tied".
The first of these things -- the streakiness comment -- is the one that bothers me more, though. The "percentage points" comment is just a matter of a convention that disagrees with the one the rest of the world makes. (Why doesn't baseball report winning percentages to just two decimal places? Because that wouldn't be enough accuracy; baseball teams play 162 games a season.) But the streakiness comment is the sort of thing that shows that people don't understand the nature of randomness; people read something into "streaks" that is really just good luck.
probabilities in "Set for Life"
On Friday, July 20, on ABC, a program called Set For Life premiered; another episode aired last night.
It's July. It's a prime-time game show. As you may have guessed, this is the stupidest game show ever. By "stupidest" I don't mean "the show is a bad idea" -- although I think it is, and it probably won't last long, nobody ever went broke by betting on the stupidity of the American public. But what I mean is that it involves absolutely no skill.
It reminds me of Deal or No Deal, in that no skill is involved except the skill of knowing when to stop. In fact, the New York Times compared this show to Deal or No Deal back in October, writing: "On “Deal or No Deal,” Mr. Mandel does not even pretend that skill is involved. He says upfront that his game is about “giving away a ton of money,” not winning a ton of money."
Well, this is in the same vein. Here's how it works. First, there's a round that I didn't see, in which somehow it is determined how much the player will win per month; I don't know how they do this. (The Wikipedia article implies it doesn't air; people on various game show forums, like this one, seem to think that this part is probably more interesting than what actually airs. I agree, because what actually airs is less interesting than staring out my window. To be fair, staring out my window is pretty interesting.) Then, in the second round, it's determined how long the player will receive this monthly amount for. There are fifteen lights embedded in the stage; eleven are white, four are red. The player picks a light, and if it's white they go one step "up the ladder"; if it's red they go one step "down the ladder". After picking a white light, the player can elect to get out of the game and keep the money; after a red light, they are not allowed to make this choice. If they pick all four red lights, they go home with no money. The ladder contains the following time amounts:
zero, 1 month, 6 months, 1 year, 2 years, 3 years, 5 years, 10 years, 15 years, 20 years, 25 years, "set for life" (40 years).
(It's not clear to me what happens if you pick a red light on the first step. This turns out not to matter in my analysis.)
They have someone in the audience (in the first episode it was the contestant's nephew) giving "advice" on which lights to pick, and there's the pretense that some level of "skill" is involved in picking which lights are going to be white and which are going to be red, but there's actually no pattern at all. So it's really just a game of chicken. There's something of a morality play embedded in this show -- sometimes greed is good, but sometimes it can backfire and you get screwed over.
(The show's gimmick is that there's someone in an "isolation pod" who can also choose to end the game, without the contestant outside knowing when that's happened; I'll ignore this.)
At least with Who Wants To Be A Millionaire?, which started this whole modern game-show craze, some knowledge was required. What I like about the new shows is the amount of drama that's invested in them -- each "decision" to keep going comes with dramatic music, as if it Really Matters. As if the players had any sort of control over the game. But for some reason the format of uncovering lights that have already been set works a lot better, on TV, than if there were just a random number generator simulating the light picking. For example, if there were four red lights left and seven white lights left, it would say "you win" with probability 7/11 and "you lose" with probability 4/11.
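A random-number-generator version of a single pick is a one-liner; with w white and r red lights left, the next light is white with probability w/(w+r). A quick simulation (a hypothetical sketch, obviously not the show's actual mechanism) confirms the 7/11 figure for the example above:

```python
import random

def pick_light(white_left, red_left, rng=random):
    """One pick from the remaining lights: white with probability w/(w+r)."""
    return "white" if rng.random() < white_left / (white_left + red_left) else "red"

# With seven white and four red lights left, "white" should come up
# about 7/11 of the time (roughly 0.636).
rng = random.Random(0)
trials = 100_000
wins = sum(pick_light(7, 4, rng) == "white" for _ in range(trials))
print(wins / trials)  # close to 7/11
```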
The first question that comes to mind: what is the probability that a player wins the 40 years of monthly checks, given that they never decide to "chicken out" at some earlier step? To win this, the player has to pick all eleven white lights and no red lights -- that is, the four red lights must be the last four remaining. The probability of this is 1/C(15,4) = 1/1365.
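That 1/1365 can be checked directly with exact fractions, multiplying out the probability that each successive pick is white:

```python
from fractions import Fraction
from math import comb

# Probability of drawing all 11 white lights before any of the 4 red ones.
p = Fraction(1)
for i in range(11):
    p *= Fraction(11 - i, 15 - i)  # white lights left / lights left

print(p)            # 1/1365
print(comb(15, 4))  # 1365 ways to place the 4 red lights among the 15
```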
Now, what's the right "strategy" for this game, in terms of trying to maximize your expected winnings? At any point after you've picked a white light, there are two things you can do -- stop or keep going. If you "keep going", we want to know the expected value of your winnings over the places you might end up on the ladder. Let's say, for example, that you've found six white lights and two red lights so far, so you're at the "2 years" position on the ladder, and five white lights and two red lights remain. You pick a light at random. The probability is 5/7 that it's white, in which case you end up with the "3 years" winnings; 2/7 * 5/6 = 10/42 that you pick a red light and then a white light and end up back where you began; and 2/7 * 1/6 = 2/42 that you pick two red lights and lose everything. So the expected winnings (in years) are 3(5/7) + 2(10/42) = 55/21, or about 2.62, which is greater than 2, so it's a good bet.
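In exact arithmetic, that calculation works out as follows (same state as above: five white and two red lights left, currently at "2 years"):

```python
from fractions import Fraction

p_white     = Fraction(5, 7)                   # next light white: up to "3 years"
p_red_white = Fraction(2, 7) * Fraction(5, 6)  # red then white: back to "2 years"
p_red_red   = Fraction(2, 7) * Fraction(1, 6)  # two reds: lose everything

expected_years = 3 * p_white + 2 * p_red_white + 0 * p_red_red
print(expected_years)         # 55/21
print(float(expected_years))  # about 2.62, better than stopping at 2
```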
This same sort of reasoning works for any combination of lights. If you've so far picked zero to seven white lights and zero red lights, it turns out to be the right move to keep going. If you've picked eight white lights and zero red lights -- thus being at the "15 years" level -- staying or going are both equally good. If you've picked nine white lights and zero red lights, always stay; there are only two white lights and four red lights left! Similarly:
- if one red light has already been picked: keep going if eight or fewer white lights have been picked; if nine or more, stop.
- if two red lights have been picked: go if nine or fewer white lights have been picked; stop if ten have been picked.
- if three red lights have been picked: keep going if seven or fewer white lights have been picked; if eight or nine, staying and going are equally valuable; if ten, stop.
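The whole table can be generated by backward induction over the states (whites picked, reds picked). The sketch below assumes the ladder position is simply whites minus reds, clamped at zero, and values everything in months; since the show's exact edge-case rules aren't clear (see the note about a red light on the first step), a couple of the borderline states come out as narrow "go"s in this model rather than exact ties.

```python
from fractions import Fraction
from functools import lru_cache

# Ladder values in months; index = whites picked minus reds picked, clamped at 0.
LADDER = [0, 1, 6, 12, 24, 36, 60, 120, 180, 240, 300, 480]
WHITES, REDS = 11, 4

def stay_value(w, r):
    return Fraction(LADDER[max(w - r, 0)])

@lru_cache(maxsize=None)
def go_value(w, r):
    """Expected months from picking another light at (w whites, r reds picked)."""
    if r == REDS:
        return Fraction(0)          # fourth red light: go home with nothing
    white_left, red_left = WHITES - w, REDS - r
    total = white_left + red_left
    if total == 0:
        return stay_value(w, r)     # no lights left to pick
    value = Fraction(0)
    if white_left:
        # after a white light, the player may stop or keep going
        after_white = max(stay_value(w + 1, r), go_value(w + 1, r))
        value += Fraction(white_left, total) * after_white
    if red_left:
        # after a red light, the player must keep going
        value += Fraction(red_left, total) * go_value(w, r + 1)
    return value

for r in range(REDS):
    for w in range(WHITES + 1):
        go, stay = go_value(w, r), stay_value(w, r)
        verdict = "go" if go > stay else "stop" if go < stay else "either"
        print(f"{w:2d} white, {r} red: {verdict}")
```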
The usual caveat applies; calculating "expected value" is kind of meaningless when you only get to play once. People tend to be risk-averse with their winnings, so I'd expect to see people stopping before this strategy dictates it. For example, if two red lights and nine white lights have already been picked (leaving two of each), I don't see people taking the one-in-six risk of picking both of those red lights and losing it all.