Ask Doctor Math: All Things Mathematical

Nothing is Certain but Death and Logarithms (April 14, 2009)

<i>Dear Dr. Math,<br />I've heard that if I wanted to, ahem, "creatively adjust" some numbers, I should use numbers that start with the digit 1 more often. Why is that?<br />Inquiring Re. Statistics<br /></i><br />Dear IRS,<br /><br />How timely of you to bring this up! Indeed, there is a general pattern in the digits typically found in measured quantities, especially those spanning many orders of magnitude, for example: populations of cities, distances between stars, or, say, ADJUSTED GROSS INCOME. The pattern is that the digit 1 occurs more often as the leading digit of the number, approximately 30% of the time, followed by the digit 2 about 18% of the time, and so on. The probability, in fact, of having a leading digit equal to <i>d</i> is equal to <i>log(1+1/d)</i> (where <i>log</i> is the base-10 logarithm), for any <i>d</i> = 1, 2, ..., 9. This rule is called <a href="http://en.wikipedia.org/wiki/Benford%27s_law">Benford's Law</a>, named (as is often the case) for the second person to discover it; he noticed that the pages of the library's book of logarithms were much dirtier, hence more used, at the front of the book where the numbers began with 1.
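The formula is easy to tabulate. Here's a quick sketch in Python (the variable names are mine):

```python
import math

# Benford's Law: P(leading digit = d) = log10(1 + 1/d), for d = 1..9
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in benford.items():
    print(f"{d}: {p:.1%}")   # 1: 30.1%, 2: 17.6%, ..., 9: 4.6%

# The nine probabilities cover every possible leading digit, so they sum to 1.
assert abs(sum(benford.values()) - 1) < 1e-12
```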
In pictures, the distribution of digits looks like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_zy58RlQeD60/SeVib4GMZxI/AAAAAAAAAJc/pPAL2avpehE/s1600-h/compPlot1a.gif"><img style="cursor: pointer; width: 400px; height: 287px;" src="http://3.bp.blogspot.com/_zy58RlQeD60/SeVib4GMZxI/AAAAAAAAAJc/pPAL2avpehE/s400/compPlot1a.gif" alt="" id="BLOGGER_PHOTO_ID_5324770365489833746" border="0" /></a><br /><br />It seems counterintuitive that any digit should be more likely than any other. After all, if we pick a number "at random," shouldn't it have the same probability of being between 100 and 199 as it does of being between 200 and 299, etc.? If so, the probability of getting a 1 as the first digit would in fact be the same as getting a 2. However, this turns out to be impossible, and it has to do with a very common misconception about "randomness."<br /><br />The fact of the matter is that there's actually <i>no way to pick a number uniformly at random without further restrictions</i>. So, for example, if I tell you to "pick a random number," it must be the case that you're more likely to select some particular number than some other (which ones, however, are up to you). Suppose this weren't true, so that all numbers were equally likely. Just to be clear, let's focus on the positive integers, the numbers 1, 2, 3, ... Now let <i>p</i> be the probability of picking any one of them, say the number 1. Since they're all supposedly equally likely, this means <i>p</i> is also the probability of picking 2, and of picking 3, and so on. So the chance of picking any number between 1 and 10, say, is 10*<i>p</i>. Since probabilities can never exceed 1, this means <i>p</i> ≤ 1/10. OK, well, by the same reasoning, the probability of picking a number between 1 and 1000 is 1000*<i>p</i>, so <i>p</i> ≤ 1/1000.
Similarly, <i>p</i> ≤ 1/1,000,000, <i>p</i> ≤ 1/(<a href="http://doctormath.blogspot.com/2009/02/how-big-is-that-number-episode-1.html">1 googol</a>), and so on. In fact, it follows that <i>p</i> ≤ 1/<i>N</i> for any <i>N</i>, and the only (non-negative) number with that property is <i>p</i> = 0. Ergo, the chance of getting any particular integer is 0, from which it follows (<a href="http://en.wikipedia.org/wiki/Countable_additivity">for reasons I won't get into here</a>) that the probability of picking an integer <i>at all</i> is 0, a "contradiction." That's math-speak for "whoops." You can only pick an integer uniformly from a <i>finite set of possibilities</i>.<br /><br />So, what do we mean when we say that a number is "random"? Well, there are ways for things to be random without being uniformly random. For example, if you roll a pair of dice, you might say the outcome is "<a href="http://doctormath.blogspot.com/2009/02/lets-make-deal-or-no-deal.html">random</a>," but you know that the sum is more likely to be 7 than it is to be 2. Similarly, if you pick a person (uniformly) randomly from the population of the U.S. (note: the population is finite, so that's OK), you might model his/her IQ as a random quantity with a <a href="http://en.wikipedia.org/wiki/Normal_distribution">normal distribution</a>, a.k.a. a "bell curve," centered around 100. The existence of different distributions besides the uniform distribution is the source of a lot of popular misunderstandings about statistics.<br /><br />None of that explains where Benford's Law comes from, of course, but it's at least an argument for why it's <i>plausible</i> that the distribution isn't uniform.
To explain the appearance of the <i>particular</i> logarithmic distribution of digits I wrote above, we'd need some kind of model for the quantities we were observing, and it can't just be "the uniform distribution on the positive integers," because we already showed that there's no such thing.<br /><br />One reasonable idea is that the thing we're measuring might be "scale invariant." That is, if it has a wide range of possible values, it might not matter what size units we use to measure it--we'll get roughly the same distribution of numbers. So if we imagine switching from measuring lengths in feet to measuring them in "half-feet,"* say, then anything that gave us a foot-length starting with 1, say 1.2 feet or 1.8 feet, will now give us a half-foot length starting either with 2 or 3, in this case 2.4 and 3.6 "half-feet." If the two distributions are the same, then the occurrence of a first-digit 1 must be the same as the occurrence of a first-digit 2 or 3, combined. By the same reasoning, any quantity initially beginning with a 5, 6, 7, 8, or 9 would now begin with a 1, when doubled. Similarly, by tripling the scale, measuring in "third-feet" and assuming the same invariance, we'd get a 1 as often as a 3, 4, or 5 put together. And so on. By considering every possible scale, this line of reasoning leads you pretty much straight to Benford's Law. This scale invariance kind of makes sense if we're measuring ADJUSTED GROSS INCOME, since incomes vary by so much (so very, very much), whereas something like height wouldn't exhibit scale invariance, being more tightly distributed around its mean.<br /><br />Another perspective is that when we measure things, we're frequently observing something in the midst of an <i>exponential growth</i>. 
Exponential growth happens all the time in nature, for example, in the sizes of <a href="http://doctormath.blogspot.com/2009/02/what-mean-means.html">populations</a> or SECRET OFFSHORE BANK ACCOUNTS with a fixed (compound) interest rate. The key feature of a quantity growing exponentially is that it has a fixed "doubling time." That is, the amount of time it takes to grow by a factor of 2 is independent of how big it is currently. For example, let's assume your illegal bank account (well not <i>yours</i>, but <i>one's</i>) doubles in value every year and starts off with a balance of $1000. At the end of year 1, you'd have $2000, at the end of year 2 you'd have $4000, at the end of year 3 you'd have $8000, and so on. So for the whole first year, your bank balance would start with the digit 1, but during the second year you would have some balances starting with 2 and some with 3. During the third year, you would have balances starting with 4, 5, 6, and 7. If we AUDITED your account at some randomly chosen time, we'd be just as likely to see a balance starting with 1 as a balance starting with 2 or 3, combined, and so on. In other words, we have the same "scale invariance" conditions as before, which lead us back to Benford's Law. The same would be true no matter how quickly the account grew; exponential growth sampled at a random time gives us a logarithmic distribution of digits.<br /><br />To give you a concrete example, I went through the first 100 powers of 2--1, 2, 4, 8, 16, ...**--and instructed my computer to keep track of just the first digits. 
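That experiment takes only a few lines. Here's a sketch in Python, tallying the leading digits of 2^0 through 2^99 and comparing them to what Benford's Law predicts:

```python
import math
from collections import Counter

# Tally the leading digits of the first 100 powers of 2 (2^0 through 2^99).
leading = Counter(int(str(2 ** k)[0]) for k in range(100))

for d in range(1, 10):
    observed = leading[d] / 100
    predicted = math.log10(1 + 1 / d)
    print(f"{d}: observed {observed:.2f}, Benford predicts {predicted:.2f}")
```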
The results, as you can see, conform pretty nicely to Benford's Law:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_zy58RlQeD60/SeVhYcTlrZI/AAAAAAAAAJM/ccdah4qzKqw/s1600-h/powersof2.JPG"><img style="cursor: pointer; width: 400px; height: 240px;" src="http://2.bp.blogspot.com/_zy58RlQeD60/SeVhYcTlrZI/AAAAAAAAAJM/ccdah4qzKqw/s400/powersof2.JPG" alt="" id="BLOGGER_PHOTO_ID_5324769206978588050" border="0" /></a><br /><br />For whatever reason, it appears that Benford's Law, like TAX LAW, is the law.<br /><br />-DrM<br /><br /><br />*Sounds vaguely <a href="http://doctormath.blogspot.com/2009/02/in-hole-in-grou31aadnm-vnatoh424.html">Tolkienesque</a>, don't you think?<br /><br />**32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608, 16777216, 33554432, 67108864, 134217728, 268435456, 536870912, 1073741824, 2147483648, 4294967296, 8589934592, 17179869184, 34359738368, 68719476736, 137438953472, 274877906944, 549755813888, 1099511627776, 2199023255552, 4398046511104, 8796093022208, 17592186044416, 35184372088832, 70368744177664, 140737488355328, 281474976710656, 562949953421312, 1125899906842624, 2251799813685248, 4503599627370496, 9007199254740992, 18014398509481984, 36028797018963968, 72057594037927936, 144115188075855872, 288230376151711744, 576460752303423488, 1152921504606846976, 2305843009213693952, 4611686018427387904, 9223372036854775808, 18446744073709551616, 36893488147419103232, 73786976294838206464, 147573952589676412928, 295147905179352825856, 590295810358705651712, 1180591620717411303424, 2361183241434822606848, 4722366482869645213696, 9444732965739290427392, 18889465931478580854784, 37778931862957161709568, 75557863725914323419136, 151115727451828646838272, 302231454903657293676544, 604462909807314587353088, 1208925819614629174706176, 2417851639229258349412352, 4835703278458516698824704, 9671406556917033397649408, 
19342813113834066795298816, 38685626227668133590597632, 77371252455336267181195264, 154742504910672534362390528, 309485009821345068724781056, 618970019642690137449562112, 1237940039285380274899124224, 2475880078570760549798248448, 4951760157141521099596496896, 9903520314283042199192993792, 19807040628566084398385987584, 39614081257132168796771975168, 79228162514264337593543950336, 158456325028528675187087900672, 316912650057057350374175801344, 633825300114114700748351602688

Elevator Action (March 31, 2009)

<i>Dear Dr. Math,<br />How do I figure out whether to take the elevator or the stairs? The elevator is faster, when it comes, but sometimes I think I end up waiting longer than it would have taken me to just walk.<br />Regards,<br />StairMaster<br /></i><br /><br />Excellent question, SM, but we need to clarify what criteria we're using to decide between the two options. Some people might generally prefer the stairs because they enjoy the exercise, or maybe they're worried about the possibility of <a href="http://www.youtube.com/watch?v=p_bMhNI_TY8" target="_blank">being stuck alone in a metal box for 41 hours</a>. Some people like the elevator because it's more social, and you can do that thing where you jump at the very end and it feels like you're floating. But from the way you asked, I'm assuming you're just trying to minimize time, and that's all you care about. By the way, what's the big hurry? Take time to enjoy the little things, SM; they're all we have.<br /><br />To actually answer the question, we need a model for all of the relevant quantities.
To keep it general, I'll use variable names instead of hard numbers, and then we can take a look at some specific examples, and you can apply the theory to your own needs.<br /><br />First, there's the stair option. Let's use <i>S</i> to denote the time it takes to walk. Since <i>S</i> doesn't really change much from trip to trip, we'll treat it as a constant. If you wanted to get more sophisticated, you could account for things like how many other people were trying to take the stairs, whether you were carrying something heavy, whether you could slide down the banister, etc.<br /><br />The elevator option is the more interesting one. Let's let <i>e</i> be the shortest possible time the elevator could take, say, if it were already there waiting for you and didn't make any other stops. Similarly, there's a maximum time the elevator could take, if it was the greatest possible number of floors away from you and someone had pushed all the buttons, or something. Denote that time by <i>E</i>. (Upper case for the bigger time; lower case for the smaller one.) Again, we're treating <i>e </i>and <i>E</i> as known quantities and as constants; I encourage you to measure them sometime. If <i>S </i> < <i> e</i>, it's always faster to take the stairs, no matter what. If <i>E </i> < <i> S</i>, it's faster to take the elevator, even in the worst case. The remaining possibility is that <i> e < S < E </i>, so sometimes one is better, sometimes the other. It sounds like that's the situation with your elevator, SM, so we'll take it as a given.<br /><br />Since we don't know the <i>actual</i> length of time the elevator would take, we have to treat it as a <i>random</i> quantity with a value somewhere between <i>e</i> and <i>E</i>. Let's call the actual time <i>T</i>. Here again, we need to consider what information we might have about <i>T</i> that could tell us what kind of <i>probability distribution </i>is reasonable to associate to it. 
For example, should we expect it to usually be closer to its minimum possible value, <i>e</i>? That would make sense if not many other people used that elevator and it usually hung out on the correct floor--say, if we were trying to go up from the ground floor to the 7th floor in an apartment building. On the other hand, if we were trying to go down from the top floor of a busy office building, it might be more reasonable to expect <i>T</i> to be closer to its maximum value, <i>E</i>, more of the time.<br /><br />In the absence of any other information, we'll assume the distribution of <i>T</i> is <a href="http://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29"><i>uniform</i></a> on the set of times between <i>e</i> and <i>E</i>, meaning it's just as likely to be any value as any other. Another way of saying this is to say that the probability that <i>T</i> is <i>less</i> than any given value, say <i>x</i> , is proportional to the difference between <i>x</i> and <i>e</i>. For <i>x</i> = <i>e</i>, the probability is 0, since <i>e</i> is the smallest possible time; for <i>x</i> = <i>E</i>, the greatest possible time, the probability is 1; for times in the middle, the probability is between 0 and 1. 
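In code, that description of the uniform distribution is just a linear ramp between the two endpoints (a sketch; the function name is mine):

```python
def prob_T_less_than(x, e, E):
    """P(T < x) for T uniform on [e, E]: proportional to the difference x - e."""
    if x <= e:
        return 0.0
    if x >= E:
        return 1.0
    return (x - e) / (E - e)

# Sanity checks matching the description above: 0 at the minimum elevator
# time, 1 at the maximum, and strictly in between for times in the middle.
assert prob_T_less_than(1.0, 1.0, 5.0) == 0.0
assert prob_T_less_than(5.0, 1.0, 5.0) == 1.0
assert prob_T_less_than(3.0, 1.0, 5.0) == 0.5
```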
In pictures, the distribution looks like this (<span style="font-style: italic;">a</span> = <span style="font-style: italic;">e</span>, <span style="font-style: italic;">b</span> = <span style="font-style: italic;">E</span>):<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SdKL8dX-_4I/AAAAAAAAAJE/RJKIiimZoRs/s1600-h/800px-Uniform_distribution_PDF.png"><img style="cursor: pointer; width: 400px; height: 300px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SdKL8dX-_4I/AAAAAAAAAJE/RJKIiimZoRs/s400/800px-Uniform_distribution_PDF.png" alt="" id="BLOGGER_PHOTO_ID_5319467980671614850" border="0" /></a><br /><br />As a result, the <a href="http://doctormath.blogspot.com/2009/02/unexpected-values-part-1.html">expected value</a>, or average, of <i>T</i> is the number halfway in between its minimum and maximum possible values, that is, (<i>e</i> + <i>E</i>)/2. So, in the sense of minimizing expected values, you should take the stairs only if <i>S</i> < (<i>e + E</i>)/2; otherwise, you're better off waiting for the elevator, on average. As in my discussion of <a href="http://doctormath.blogspot.com/2009/02/unexpected-values-part-2.html">the lottery</a>, though, the expected value may not be the only consideration. Since the time to take the stairs is a known quantity, it has a variance of 0, and that security may be worth some trade off in expected value. On the other hand, maybe you like to gamble, SM (I don't know you that well), in which case you might prefer the thrill of betting on the higher-risk, higher-reward elevator, even if the average time is slightly greater. Over many trials, though, you'd save time choosing the option with the smallest expected value.<br /><br />Just to see how this would play out with actual numbers, I'll consider a scenario that I frequently encounter when taking the subway. (And you thought this was just about elevators!) 
Here, the role of "stairs" will be played by the #1 downtown <a href="http://www.mta.info/nyct/maps/submap.htm">local train</a>, which makes frequent stops every few blocks, and the "elevator" corresponds to the #2 downtown <a href="http://www.alaskarails.org/sf/film/runaway-train/rt5.jpg">express train</a>, making stops only every few stations. Let's assume that I'm already on the local train at the 96th Street station, and I'm trying to get to Times Square as quickly as possible. My options are 1) stay on the local train, which will get to Times Square after some fixed amount of time, or 2) gamble on the express train, which might get there earlier or might not.<br /><br />According to the <a href="http://www.mta.info/nyct/service/pdf/t1cur.pdf">schedule</a>, it takes the local train about 10 minutes to get from 96th Street to Times Square. And the <a href="http://www.mta.info/nyct/service/pdf/t2cur.pdf">express train</a> takes 6 minutes but only runs every 12 minutes (in the middle of the day). Therefore, the least possible time it could take is 6 minutes, and the greatest possible time is 18 minutes. Assuming the distribution of times to be uniform between these extremes gives us an average travel time of (6 + 18)/2 = 12 minutes, which is 2 minutes more than the local train. Hence, to minimize expected value, I should always stay on the #1. Here, the randomness of the travel time has more to do with how long I have to wait for the train than how long the trip will actually take, and it's somewhat reasonable to treat this as being uniformly distributed, since the train runs on a regular schedule (every 12 minutes) but <i>I don't know what time it is currently</i> relative to that schedule.<br /><br />Of course, in practice the situation is more complicated than that. For example, there are actually two possible express trains I can take (the #2 or the #3), just as there might be two possible elevators you could take, and I'll take whichever comes first. 
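A quick Monte Carlo check of these numbers (a sketch; it assumes my arrival time is uniformly random relative to the 12-minute schedule, and that the two express lines run on independent schedules):

```python
import random

random.seed(0)
N = 100_000
LOCAL = 10.0     # local train: a fixed 10 minutes to Times Square
RIDE = 6.0       # express ride time, in minutes
HEADWAY = 12.0   # an express train leaves every 12 minutes

# One express line: wait somewhere between 0 and 12 minutes, then ride 6.
one_line = [RIDE + random.uniform(0, HEADWAY) for _ in range(N)]

# Two independent express lines (the #2 and the #3): take whichever comes first.
two_lines = [RIDE + min(random.uniform(0, HEADWAY), random.uniform(0, HEADWAY))
             for _ in range(N)]

print(f"local train:       {LOCAL:.1f} minutes")
print(f"one express line:  {sum(one_line) / N:.1f} minutes")   # about 12
print(f"two express lines: {sum(two_lines) / N:.1f} minutes")  # about 10
```

The minimum of two independent uniform waits averages 4 minutes rather than 6, which is what shortens the two-line trip.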
If we treated these both as uniformly random variables, independent of each other, then the distribution of the time until the <i>next </i>train would <span style="font-style: italic;">not</span> be uniform and would, in fact, have a smaller expected value.<br /><br />For the numbers I gave above, it actually works out that the #2/#3 express option has the <i>same</i> expected value, 10 minutes, as the local train. So it's back to personal preference to break the tie. Another interesting variant is to consider how many people you see waiting for the train, elevator, bus, etc., and use that as a way to estimate the amount of time they've already been waiting. For example, if you see 3 people waiting and you know people tend to show up at the rate of 2 per minute, then you could estimate the time already waited at 1.5 minutes, reducing the maximum possible travel time by that same amount and perhaps tipping the balance.<br /><br />Hope that helps some, and let me know whether the time you save turns out to be greater than the time spent thinking about the problem. I've got to go now to catch a (local) train.<br /><br />-DrM

An Average Screwball (March 26, 2009)

<i>Dear Dr. Math,<br />I've read that Derek Jeter had a lower batting average than David Justice in both 1995 and 1996, but if you combine the two years together Jeter's average is higher. How is that possible? I don't get it.<br />SoxFan04<br /><br /></i>Dear SoxFan,<br /><br />No other sport seems to bring out the statistics junkies quite like baseball, what with the ERAs and OBPs and WHIPs.* Where else do you find sports fans casually throwing around ratio approximations computed to <a href="http://en.wikipedia.org/wiki/Batting_average" target="_blank">3 digits of accuracy</a>?
I guess it's all a holdover from cricket; we in the U.S. should at least count ourselves lucky that we don't have to learn things like the <a href="http://en.wikipedia.org/wiki/Duckworth-Lewis_method" target="_blank">Duckworth-Lewis method</a>.<br /><br />So, what you say is true about Jeter and Justice. In 1995, Jeter had a batting average (that's the ratio of hits to at-bats, for the baseball-averse) of .250, and Justice's average was slightly higher at .253. In 1996, Jeter hit a much more respectable .314 but Justice out-paced him again, hitting .321. However, when the results of both years are pooled together (total hits and total at-bats), Jeter's combined average is .310, versus Justice's paltry .270. How could this happen, in the most American of sports?<br /><br />It's a particular case of something called Simpson's paradox, which generally deals with the weird things that can happen when you try to combine averages over different sets of data. See, the source of the confusion is that Jeter and Justice had <i>different numbers of at-bats</i> in each of the two years, so the reported "averages" really aren't measuring their performances by the same standard. In 1995, Jeter only had 48 at-bats, whereas Justice had 411. In the following year, the numbers were almost reversed, with Jeter having a whopping 582 at-bats to Justice's 140.<br /><br />To see why this matters, consider the following extreme example: Let's say that I got to act out my childhood fantasy and somehow got drafted into Major League Baseball in the year 1995, but I only got 1 at-bat. Imagine that by some miracle I managed to get a hit in that at-bat (let's go ahead and say it was the game-winning home run; it's my fantasy life, OK?). So my batting average for that season would be 1.000, the highest possible. No matter how good Derek Jeter was, he couldn't possibly beat that average.
In fact, let's assume he had an otherwise incredible year and got 999 hits out of 1000 at-bats, for an average of .999. Still, though, my average is better. Now, as a result of my awesome performance in 1995, let's say the manager decided to let me have 100 at-bats the following year, but this time, I only managed to get 1 hit. So my average for the second year would be 1/100 = .010, probably the end of my career. Meanwhile, imagine that Jeter got injured for most of that year and only had 1 at-bat, during which he didn't get a hit. Thus, his average for the second season would be .000, the worst possible. Again, my average is higher. So on paper, it would appear that I was better. However, when you combine all the hits and at-bats, I only got 2 hits out of 101 attempts, for an average of about .020, whereas Jeter actually had 999 hits out of 1001 at-bats, for an amazing average of .998. My two better seasons were merely the result of creative bookkeeping. The same thing could happen if we split up our averages against right-handed and left-handed pitchers, or at home and away, etc.<br /><br />This is part of the reason that baseball records-keepers require a minimum number of at-bats in a season for a player's average to "count." Otherwise, some player could start his career with a hit and then promptly "retire" with an average of 1.000.<br /><br />The phenomenon isn't just limited to sports, either. Simpson's paradox rears its ugly head in all kinds of statistical analyses, like the ones in medical studies, for example. It can happen that treatment 1 appears to be more effective than treatment 2 in each of two subgroups of a population, but when you pool all the results together, treatment 2 is better. (For an example, replace "Major League Baseball" with "a pharmaceutical company," "at-bats" with "patients," "hits" with "successful treatments," "Derek Jeter" with "a rival drug company," and "childhood fantasy" with "adult nightmare" above.)
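You can check the Jeter/Justice numbers directly. In the sketch below, the season hit totals are reconstructed from the at-bats and averages quoted above (so treat them as inferred, not official):

```python
# (hits, at-bats) per season, inferred from the quoted at-bats and averages
jeter   = {1995: (12, 48),   1996: (183, 582)}
justice = {1995: (104, 411), 1996: (45, 140)}

def avg(hits, at_bats):
    return hits / at_bats

# Justice wins each individual season...
for year in (1995, 1996):
    assert avg(*justice[year]) > avg(*jeter[year])

# ...but Jeter wins when the two seasons are pooled.
jeter_combined   = avg(12 + 183, 48 + 582)    # 195/630, about .310
justice_combined = avg(104 + 45, 411 + 140)   # 149/551, about .270
assert jeter_combined > justice_combined
```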
Again, the key is that the sizes of the groups receiving each treatment have to be different from each other in order for the phenomenon to manifest. If the group sizes (or number of at-bats, or whatever) are held <i>constant,</i> the paradox disappears, because the higher averages would actually correspond to greater <i>numbers of successes</i> in each trial.<br /><br />As I discussed in my previous post about <a href="http://doctormath.blogspot.com/2009/02/what-mean-means.html">different kinds of averages</a>, an average is supposed to represent the single quantity that, if repeated, would have the same overall effect as the original set of quantities. However, that doesn't mean the average is the end of the story. Another essential component is how many times that quantity was repeated. If I told you I'd pay you an average of $100 per hour to do some work for me around the house, you'd probably be fairly disappointed if the "work" consisted of 2 seconds of opening a pickle jar (total cost to me: 5.6¢; look on your face: murderous rage).<br /><br />-DrM<br /><br /><br />*Not to mention RISPs, DICEs, BABIPs, HBPs, <a href="http://en.wikipedia.org/wiki/Baseball_statistics">...</a>

How Big is That Number? Special "March Madness" Episode 2^63 (March 17, 2009)

<i>Dear Dr. Math,<br />How many possible NCAA basketball tournament brackets are there? If I just guessed randomly, what are the odds I'd get it exactly right?<br />BYOBasketball<br /><br /></i>Ah yes.
I love March Madness for many reasons:<br /><br />1) the <a href="http://en.wikipedia.org/wiki/Kansas_State_Wildcats" target="_blank">wide</a> <a href="http://en.wikipedia.org/wiki/Northern_Michigan_University" target="_blank">assortment</a> <a href="http://en.wikipedia.org/wiki/Weber_State_Wildcats#Athletics" target="_blank">of</a> <a href="http://en.wikipedia.org/wiki/Arizona_Wildcats" target="_blank">team</a> <a href="http://en.wikipedia.org/wiki/New_Hampshire_Wildcats" target="_blank">names</a> <a href="http://en.wikipedia.org/wiki/Davidson_Wildcats_men%27s_basketball" target="_blank">and</a> <a href="http://en.wikipedia.org/wiki/Villanova_Wildcats" target="_blank">mascots</a>,<br />2) Dick Vitale, and<br />3) it's the one time of the year when people care about the number 2^63, because that's the number of possible ways of filling out a tournament bracket prediction.<br /><br />Even <a href="http://thecaucus.blogs.nytimes.com/2008/03/20/obamas-ncaa-bracket/" target="_blank">then-future-President</a> Barack Obama couldn't resist giving it a try! Let's see how it works, baby!<br /><br />To begin with, the NCAA officials select 64 teams to play in the tournament (actually 65--there's a traditional preliminary game to determine the last team in the field, but we'll ignore that). They divide the teams up into 4 groups of 16 and then rank the teams within each group, from 1 to 16. Each group holds a single elimination tournament, with the high-ranking teams pairing off initially against the low-ranking teams, and the 4 winners (the Final Four) go on to play two more games to determine the national champion. At some point, the players cut down the nets with scissors for some reason. Predicting the way the entire tournament will unfold (that is, the "bracket") has become a popular office tradition; legend has it that no one has ever predicted the entire bracket exactly right, although some websites like Yahoo! 
have offered <a href="http://tournament.fantasysports.yahoo.com/t1" target="_blank">prizes of up to $1 million</a> if anyone does it. But if they charged even a penny to play, I wouldn't take that bet. Here's why:<br /><br />Each game played has 2 possibilities for a winner. Considering just the first round of games, of which there are 32, this means the number of possible ways of predicting the winners is 2^32, or 4,294,967,296. Assuming you had guessed the first round correctly, there would now be 16 second-round games, so the number of possible predictions for the second round would be 2^16 = 65,536. And so on. In the third round, there would be 2^8 = 256 possibilities; in the fourth round, 2^4 = 16; in the fifth round, 2^2 = 4, and in the final game, you'd still have to guess which of the 2 remaining teams would be the champion.<br /><br />Altogether then, the number of possible ways to fill out a bracket is 2^32 * 2^16 * 2^8 * 2^4 * 2^2 * 2 = 2^(32+16+8+4+2+1) = 2^63. Another way to arrive at this number is to consider the fact that in total, 63 of the 64 teams have to lose a game over the course of the tournament, meaning there are 63 total games played. So, by the same reasoning as before, the number of outcomes of these games is 2^63 = 9,223,372,036,854,775,808, about 9.2 * 10^18. In words, that's 9.2 quintillion, a.k.a. 9.2 million trillion.<br /><br />How big is that number?<br /><br />Well, if you were to write out each bracket prediction on a sheet of standard paper (thickness = .1mm), you would have a stack of paper (9.2 * 10^18 * .1 * .001) = 9.2 * 10^14 meters tall, which is about 6200 times farther than the distance from the Earth to the Sun. If you stole 1 paperclip (mass approx. 1 gram) for each sheet, you'd end up with 9.2 * 10^18 * 1 * .001 = 9.2 * 10^15 kilograms of paperclips, which is the equivalent weight of 46 billion <a href="http://hypertextbook.com/facts/2003/MichaelShmukler.shtml" target="_blank">blue whales</a>.
Every person on Earth could have 7 blue whales' worth of paperclips. If, instead of working, you filled out 1 bracket per second, 24 hours a day, it would take you (9.2 * 10^18)/(60*60*24*365) ≈ 290 billion years. You'd be so busy filling out brackets that you wouldn't even notice that the tournament had ended billions of years ago, along with all of human civilization! (Probably.)<br /><br />Now, does that mean your odds of picking a perfect bracket are 9.2 quintillion-to-1? Hold on just a second there, shooter. That would only be correct if you were assuming that all possible brackets were equally likely to occur. However, with a little thought, we can probably improve on that assumption considerably. For example, in the 24 years since the tournament was expanded to include 64 teams, a #1 ranked team has never lost in the first round to a #16 seed. So, even if we take the ranking as the only meaningful predictor, we can say that some brackets, namely those in which the high-ranked teams always win, are more likely than others.<br /><br />Using data I found on <a href="http://www.hoopstournament.net/" target="_blank">www.hoopstournament.net</a>, I was able to estimate the probabilities of the higher-ranked team winning each game in the tournament based on the results of past years (remind me sometime to talk about estimating bias, will you?), and my guess at the probability of the "no upsets" bracket is 1.64 * 10^-11, so the odds against the occurrence of this bracket are "only" about 61 billion-to-1. Since we're assuming that it's always more likely for the higher ranked team to win any given game, this bracket is our best hope. Those odds are still pretty long, though.
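The round-by-round counting argument is easy to verify in a few lines:

```python
# Games per round in a 64-team single-elimination bracket: 32, 16, 8, 4, 2, 1.
rounds = [32, 16, 8, 4, 2, 1]
total_games = sum(rounds)   # 63: every team except the champion loses exactly once
brackets = 2 ** total_games

assert total_games == 63
assert brackets == 9_223_372_036_854_775_808   # 2^63, about 9.2 * 10^18

# The same count, built up round by round as in the text:
round_by_round = 1
for games in rounds:
    round_by_round *= 2 ** games
assert round_by_round == brackets
```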
For comparison, let's say your ATM code were reset to a random 4 digit number; you'd be about as likely to win the bracket challenge as you would be to <a href="http://doctormath.blogspot.com/2009/02/unexpected-values-part-2.html" target="_blank">win the Powerball</a> and then guess the new PIN on the first try (presumably to deposit your winnings).<br /><br />Many of you sports fans out there are no doubt hurling expletives at the screen right about now, because, as you know, there are <i>always</i> some upsets. I mean, who could forget <a href="http://en.wikipedia.org/wiki/Ncaa_tournament_upsets" target="_blank">Coppin State over South Carolina in 1997</a>, right? There's a bit of an odd paradox here: even though the "no upsets" tournament prediction is the most likely (based on our assumptions), the <i>number of upsets</i> is more likely to be greater than 0 than otherwise. In fact, the most likely number of upsets over the course of the tournament, according to my predictions, is about 18. (Note: I'm including "upsets" like the #9 seed beating the #8 seed--an event that has happened about as often as it hasn't over the years.)<br /><br />So, the most likely scenario is that there are no upsets, and yet the most likely number of upsets is 18. How can both statements be true? The answer lies in the fact that we're not being asked to predict just the number of upsets, but the actual <i>sequence</i> of events:<br /><br />Consider the simpler example of rolling a 6-sided die. Let's assume all faces are equally likely, but let's say we really don't want any sixes, and that's all we care about. We'll label the six-side as "L" and the other 5 sides as "W." So, the chance of winning any particular roll is 5/6. Not bad. Now let's imagine rolling this die 60 times. Since the most likely occurrence in each roll is that we'll win, the most likely sequence of 60 rolls is a sequence of 60 wins, with probability (5/6)^60, about 1.77 * 10^-5, roughly 1 in 57,000. 
If we put a loss in the sequence in any location (first, second, third, etc.), the probability would go down to (5/6)^59 * (1/6) = 3.54 * 10^-6, which is 5 times less likely. However, according to basic probability, the number of losses would follow what's called a <a href="http://en.wikipedia.org/wiki/Binomial_distribution" target="_blank">binomial distribution</a>, which has its peak (the most likely number) at the value equal to the number of rolls times the probability of losing each one. In our setup, that's 60*(1/6) = 10. If we repeated this experiment a bunch of times, the most common result would be that we'd lose 10 times out of 60.<br /><br />What's going on here is that although the chance of <i>any particular sequence</i> of dice-rolls containing a loss is <i>less</i> than the perfect "all wins" sequence, there are so many more <i>possible sequences</i> with a loss--in fact, there are 60, versus only the 1 perfect sequence--that their probabilities <i>accumulate</i> to make the chance of having a loss <i>somewhere</i> greater than not having one anywhere. And continuing in this manner, you can see that the number of sequences containing 2 losses is even greater, despite the fact that each one is even less likely, and so on. Eventually, the two effects balance each other and the combined probability maxes out at 10 losses.<br /><br />But that's not what we're being asked to predict! In order to win Yahoo!'s bracket tournament or just your office's measly little pool with a perfect bracket, you'd have to predict the precise sequence of wins and losses. 
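For the skeptical, the die numbers above can be verified in a few lines of Python, using the standard binomial formula (a sketch; `math.comb` is the built-in binomial coefficient):

```python
from math import comb

p_win = 5 / 6          # chance of not rolling a six
n = 60                 # number of rolls

# Probability of the single "perfect" all-wins sequence:
p_perfect = p_win ** n
print(p_perfect)       # about 1.77e-05, roughly 1 in 57,000

# Probability of exactly k losses somewhere among the 60 rolls:
def p_losses(k):
    return comb(n, k) * (1 / 6) ** k * p_win ** (n - k)

# The most likely NUMBER of losses is the mode of the binomial:
mode = max(range(n + 1), key=p_losses)
print(mode)            # 10, matching 60 * (1/6)
```

Any particular sequence containing losses is rarer than the all-wins sequence, yet 10 losses is the most common total, exactly the tension described above.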
People resist picking the most likely bracket because it doesn't "look random enough,"* but really they're just reducing their chances by several orders of magnitude.<br /><br />In a related experiment, if a person is presented with a flashing light that can be either red or green but is red twice as often, and the subject is asked to predict the color of the next flash, most people will "mix it up," mostly guessing red but occasionally guessing green, because they feel that their predictions need to match the "pattern" of occasionally being green. They're correct to think that it's likely that the light will sometimes be green; their mistake is assuming that they can improve their odds by guessing any particular realization of that notion. As a result, they generally do <i>worse</i> in the experiment than lab rats, who will, reasonably enough, just keep betting on red and winning 2/3 of the time.<br /><br />And that's why I'm going with the rat strategy: Louisville, Connecticut, Pittsburgh, and North Carolina in the Final Four, baby!**<br /><br />-DrM<br /><br /><br />*Or perhaps they feel that they have more information about a given game than the rankings would indicate. For example, a star player could have been recently injured, or a particular team's playing style might be well-suited to beating an otherwise better team. That being said, I think it's more common for people to just pick the teams they <i>want</i> to win and find some way to justify that prediction.<br /><br />**Personal note: Once, as a nerdy kid who knew nothing about basketball, I employed this strategy in a middle school bracket challenge, picking all four #1 seeds to advance. And as it happened, I got 3 out of 4 right, more than any of my classmates, because they were all clinging to their inferior human-brained approaches. 
Take that, millions of years of evolution!<br /><br /><br />You Can't Handle the Lack of Falsehood! (2009-03-13)<br /><br />Short Round is <a href="http://www.alt85.com/2009/03/words-words-words.html">at it again</a>, calling me out to challenge me to a good old-fashioned Math-Off.* He's laid down his best moves, throwing out verbal accusations about his old grandpappy and comparing me to Joaquin Phoenix (?), and now it's time for me to step up and put him in his place. The subject at hand was the word "autological":<br /><br /><div><div><i>Q. What does </i><i>autological mean?</i></div><div><i>A. Huh, that's sort of a random question. A word is autological if it has the property it denotes. For example, </i><i>polysyllabic, being itself polysyllabic, is autological because it can be used to describe itself. <a href="http://www.segerman.org/autological.html" target="_blank">Other examples</a> include </i><i>unhyphenated, </i><i>pronounceable, </i><i>abstract, </i><i>nonpalindromic, and </i><i>adjectival. The opposite is </i><i>heterological: the word </i><i>monosyllabic is polysyllabic and therefore not autological, but/therefore it is heterological. (Most words are going to be heterological: like </i><i>hairy and </i><i>well-dressed.) Got it?</i></div><div><div><i>Q. Yeah, actually I knew that.</i></div><div><i>A. What, are you testing me?</i></div> <i>Q. No, I have a question about the word and wanted to sort of, you know, set it up first.</i></div></div><div><i>A. Is the question whether </i><i>autological is autological?</i></div><div><i>Q. Why, yes! That's exactly what I was going to ask.</i></div> <div><i>A. Well, let's take a look. First let's see whether </i><i>heterological is heterological. 
That question is basically a restatement of <a href="http://en.wikipedia.org/wiki/Russell%27s_paradox" target="_blank">Russell's Paradox</a> and therefore maybe more the domain of Dr. Math, but as my old grandpappy used to say, "F__k Dr. Math!"<br /><br /></i>We'll see about that!**<br /><br />So, Shorty, you're right about the connection to Russell's Paradox. In fact, it's this very kind of question that almost brought all of math tumbling down in the early part of the 20th century, although it seems to have mostly recovered now. The real issue at stake here is the idea of <i>sets</i>, and what possible sets you're allowed to construct. When you get down to it, math is really built on a foundation of sets; numbers themselves are just sets in disguise--for example, 0 is actually the empty set, 1 is the set containing the empty set as an element, and so on. But the word "set" is never formally defined, and can't be. If you think about it, this is the only possible way to avoid any circular logic; if "set" were defined in terms of some other words, then those words would themselves need definitions, etc., until we eventually either had an infinite number of words or something was defined in terms of "sets" again. In English this is OK, because we can eventually just point to things and say "I mean <i>that</i>!" but in math we don't have that luxury. You just try pointing to a set sometime.<br /><br />Many people used to take the point of view that you could always construct a set by specifying the <i>properties</i> of its elements. So, for example, I could say, "Let <i>A</i> be the set of all whole numbers between 1 and 10," or "Let <i>B</i> be the set of all people with moustaches," and those sets would make sense. However, the problem occurs when you start trying to construct sets with <i>self-referential </i>properties; for example, you might like the set <i>itself</i> to be a member of itself. 
Some sets, like the sets <i>A</i> and <i>B</i> above, are clearly not members of themselves (a set can't grow a moustache), but maybe there are sets that could be, like the "set of all things that aren't made of glass." Unless sets are made of glass (which they aren't), that set would be a member of itself. So far, there's no real paradox, just some weird-looking sets. However, if you now construct "the set of all sets that are not members of themselves," we have an honest-to-Gödel paradox on our hands: Suppose the set exists, and call it <i>S</i>. The question is, Is <i>S</i> a member of itself? If it is, then it cannot be, because it fails the test for membership; therefore, it's not, so it is. And it goes on like this, <i>ad infinitum</i>.<br /><br />If all of this seems like abstract nonsense to you (you're probably right, but) consider the following alternate formulations of the paradox and similar paradoxes:<br /><br />--Imagine that all the librarians in the country are asked to compile lists of the books in their respective libraries. Since these lists are written in book-form and are kept in the library, some librarians include the catalogue among their lists of books; others don't. Now, imagine that all the librarians send the catalogues to a central library in Washington, where the head librarian makes a master catalogue of all those catalogues which don't include themselves. Should the master catalogue include itself?<br />--For each number, we can write out statements describing the number and find the statement that uses the fewest syllables. Now, let <i>n</i> be the number described by the statement, "the least number not describable using fewer than twenty syllables." Since this description only contains 19 syllables, <i>n</i> is describable in fewer than twenty syllables, contradicting its own definition.<br />--The barber of Seville, who is a man and is shaven, shaves all men in town who do not shave themselves, and only those men. 
Who shaves the barber? It can't be someone else, because then he wouldn't shave himself, so he would have to shave himself, but it can't be him, either.<br />--Groucho Marx once said he wouldn't want to join any club that would have him as a member. Fine. But what if he joined <i>every</i> club that <i>wouldn't</i> have him? Would he be a member of Texarkana Country Club? If yes, then no; if no, then yes.<br />--Your question about whether the word "heterological" is heterological, meaning not possessing the quality it denotes, can be rephrased in terms of sets as follows: Let <i>W </i>be the set of all words which do not possess the quality they denote; a word is heterological if it is a member of <i>W</i>. So the question is, then, does <i>W</i> contain the word meaning "is a member of <i>W</i>"? If so, then that word doesn't belong in <i>W</i>, therefore it does belong, etc.<br /><br />In any event, "Uh-oh" is the correct answer.<br /><br />A related paradox is the so-called "liar's paradox," also called the <a href="http://en.wikipedia.org/wiki/Epimenides_paradox" target="_blank">Epimenides paradox</a>, after the famous stand-up comedian who popularized it around 600 B.C. The joke goes like this: A Cretan walks into a bar and says "All Cretans are liars!" Rim-shot. If the guy's telling the truth, then he's lying, and <span style="font-style: italic;">vice versa</span>. A fun variant that I like to use at parties (courtesy of Raymond Smullyan) is this: Put a dollar bill in one of two envelopes. On the first envelope, write<br /><br />1) "The sentences on both envelopes are false."<br /><br />On the second envelope, write<br /><br />2) "The dollar bill is in the other envelope."<br /><br />The first claim can't be true, by the same reasoning as the liar's paradox (if it were true, then it would be false, etc.). So therefore, it must be false, right? That means that one of the two statements is true, but, again, it can't be the first one, so it must be the second. 
Thus, the dollar bill is in the first envelope. Imagine your guests' surprise when they open the first envelope and find it empty (because you put the money in the other envelope!)! Oh, they laugh and laugh.<br /><br />Another related paradox is called "Newcomb's paradox," which goes like this: Suppose you're going on a gameshow, and you'll be presented with 2 boxes, box A and box B. Your choices are:<br /><br />1) take box A and box B<br />2) take box B only<br /><br />You are told that box A contains $1000, and that box B either contains $0 or $1 million. Seems like an obvious choice, right? Either way, you stand to gain by taking both boxes. Here's the catch, though: the host tells you that somehow (either by clairvoyance, psychological observation, or computer simulation) the producers of the show knew <i>ahead of time</i> what decision you were going to make, and changed the contents of box B as a result. If you were going to just take B, they put the million inside; if you were going to take A and B, they put nothing inside B, so you'll just get $1000. So, now what? Assuming you believe their claim, how do you act?<br /><br />Now, I wouldn't be writing all this if there weren't a way out of this mess. I'm not ready to give up on math just yet. Just to warn you, though, the resolution can be scarier than the paradoxes. Ready, then? Here goes: the essential answer to all of these puzzles is "You CAN'T!"<br /><br />That is, the mathematical wisdom that we now hold to, according to the hallowed <a href="http://en.wikipedia.org/wiki/Zermelo-Fraenkel_set_theory">Zermelo-Fraenkel</a> axioms (sort of the owner's manual of math), is that there are some sets you just <i>can't construct</i>. So, for example, the set of all sets which don't contain themselves just isn't a set. It's nothing. Even though it's described in language <i>reminiscent</i> of sets, it does not exist. 
The librarian <i>can't</i> make a list of all catalogues not referencing themselves; there's <i>no such thing</i> as the least number not describable in fewer than twenty syllables; the barber <i>can't</i> shave everyone who doesn't shave himself; Groucho <i>can't</i> join every club that won't allow him; and it's <i>impossible</i> to consider all words which don't refer to themselves. The producers of The Newcomb's Paradox Show <i>can't</i> (reliably) claim to have predicted your actions, because whatever simulation they had of your brain could not have included the fact that you <i>knew </i>that they had simulated it! Or if it did, then it didn't include the fact that you knew that it knew that you knew... Again, to put things in the language of set theory, the set of things they claim to know about you <span style="font-style: italic;">cannot contain</span> <i>itself</i>.<br /><br />A consequence of all of this is that there are some logical statements that are <i>neither true nor false</i>. For example, the statement "the word 'heterological' is heterological" is neither true nor false, because assigning it either <a href="http://en.wikipedia.org/wiki/Truth_value">truth value</a> (True or False) would be inconsistent. Similarly, the statement "the word 'autological' is autological" is neither true nor false, because <i>either</i> truth value would make it consistent with the axioms. If it were true, then it would be true; if it were false, then it would be false. Starting from the axioms of set theory, we could never hope to prove either statement. So in any logical sense, they are meaningless. 
And this, by the way, resolves the envelope problem as well, since the flaw in our deduction above was the assumption that statement 1 had to be either true or false.<br /><br />This seems to fly in the face of everything we've been taught about truth and falsehood--namely, that <a href="http://en.wikipedia.org/wiki/Principle_of_bivalence">statements are always either true or false</a>. But consider for a moment, What if <i>that</i> statement is neither true nor false? Whoa. </div><br />In fact, we're not out of the woods yet, because, as Gödel showed in 1931, the consistency of the axioms <span style="font-style: italic;">themselves</span> is not provable from the axioms. So it might happen that two perfectly logical deductions could lead to inconsistent conclusions--we can't prove that it's impossible. Later, he died of starvation because he was convinced that someone was trying to poison him. So, perhaps the best advice I can give is just not to think about these things too much.<br /><br />As Tom Cruise said in <i><a href="http://www.imdb.com/title/tt0104257/quotes">A Few Good Men</a></i>, "It doesn't matter what I <i>believe</i> [to be consistent with the Zermelo-Fraenkel axioms], it only matters what I can <i>prove</i> [as a consequence of the Zermelo-Fraenkel axioms]!"<br /><br />-DrM<br /><br /><br />*A tradition dating back to ancient Greece, when Plato used to take on all challengers with his legendary <a href="http://plato-dialogues.org/faq/faq009.htm">taunt</a>, "αγεωμετρητος μηδεις εισιτω."<br /><br />**Listen, what happened between me and your grandfather (and grandmother) happened a long time ago and is really none of your business.<br /><br /><br />Prob(Rain and not(Pour))=0 (2009-03-08)<br /><br /><i>Dear Doctor Math,<br />What does 40% chance of precipitation mean?<br />Dr. 
Anonymous, Ph.D., J.D.<br /></i><br />Dear Dr. Anonymous,<br /><br />This is a tricky one, and it confuses a lot of people. First, some wrong answers, courtesy of the Internet:<br /><br />1) "It will be expected to rain, but the significant chance of rain occurs in only 40 percent of the studied geographical area."<br /><br />2) "It will definitely rain, but only for 40% of the day."<br /><br />3) "4 out of 10 meteorologists think it will rain."<br /><br />4) "40% or below means it won't rain; 70% and above means it will."<br /><br />5) "There is a 40% chance of precipitation somewhere in the forecast area at some point during the day."<br /><br />Nope; nope; nope; definitely not; almost, but no. Who would have thought there could be so many different incorrect ways to interpret such a simple statement? We can dismiss numbers 3 and 4 right away--meteorologists aren't generally in the business of taking surveys of each other, and they wouldn't bother with the whole percentage thing if they had a definite opinion either way. The others are tempting, but none get it exactly right. 
Don't fret if you found yourself agreeing with one of them, though; according to <a href="http://www.mpib-berlin.mpg.de/en/mitarbeiter/gigerenzer/pdfs/RainFinal.pdf">this study</a> in the journal <i>Risk Analysis</i>, respondents from five major cities all over the world overwhelmingly preferred numbers 1, 2 and 5 to the correct answer, which, according to <a href="http://www.srh.noaa.gov/ffc/html/pop.shtml">The National Weather Service</a> and the <a href="http://www.weatheroffice.gc.ca/mainmenu/faq_e.html#weather8">Canadian Weather Office</a>, is this:<br /><br />"The probability of precipitation (POP) is the chance that measurable precipitation (0.2 mm of rain or 0.2 cm of snow) will fall <span class="bold">on any point</span> of the forecast region during the forecast period."<br /><br />In your example, a 40% chance of rain means the forecaster has determined that the probability is 40% that you will get rained on at some point during the forecast period, usually a day in length (some forecasts include hour-to-hour predictions, but the same idea applies--either way, interpretation 2 is out). But even that's kind of a circular definition--a 40% chance means the chance is 40%--and it doesn't explain where the numbers come from in the first place. To unpack things a little more, we should talk about how weather prediction works and why uncertainty is necessarily involved.<br /><br />To begin with, meteorologists are always collecting data, tons and tons of it, using extremely sensitive instruments both on the ground and in the upper atmosphere, via weather balloons. These instruments measure a whole array of weather conditions including temperature, humidity, wind speed and direction, pressure, dew point (whatever that is), and others. 
Then, the meteorologists feed all of that information into huge computer simulations, which use a combination of historical data and physical models (for example, equations of fluid dynamics) to predict the course of events over the next few days. Presumably, the models include the movements of butterflies, because, as I understand it, they are the source of basically all weather phenomena. Why the uncertainty, then? With all that super technology, why can't they just predict the future?<br /><br />Well, as I discussed in my <a href="http://doctormath.blogspot.com/2009/02/lets-make-deal-or-no-deal.html">previous post</a> about the Monty Hall problem, there are many complex systems in the world which perhaps <i>could</i> be described exactly by massive systems of equations but which we as humans lack the computational <i>power</i> to fully comprehend. And weather is right up there with the most complex systems around. Part of what makes it so difficult to predict is its susceptibility to <i>non-linear feedback</i>, meaning the evolution of the system depends very sensitively on its current state. As a result, even minuscule differences in initial conditions will, with enough time, accumulate into larger differences and eventually produce radically different outcomes. To give you some perspective, scientists have <a href="http://www.aip.org/history/climate/chaos.htm#M_10_">known</a> since the 1960s that even a difference of 0.000001 in initial measurements could lead to vastly different weather conditions after a period of only a week. After a few weeks, you can basically forget about it--in order to predict the weather in any meaningful way, you'd need a complete understanding of the position of every atom in the universe. In the lingo of the 1990s, weather systems are a classic example of <i>chaos</i>, due to their sensitive dependence on initial conditions. 
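You can watch sensitive dependence happen in a toy system. The logistic map below is, to be clear, a classroom example of chaos rather than a weather model; it just shows how two starting values differing by one millionth end up bearing no resemblance to each other:

```python
# Logistic map x -> 4x(1 - x), a standard toy example of chaos.
def iterate(x, steps):
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

a, b = 0.400000, 0.400001   # initial conditions differing by 10^-6
for steps in (10, 30, 50):
    print(steps, iterate(a, steps), iterate(b, steps))
# After a few dozen steps, the two trajectories have completely
# decorrelated, even though they started one millionth apart.
```

The tiny initial gap roughly doubles with each iteration, so after about 20 steps it is already of order 1 and the "forecast" for trajectory `b` based on measuring `a` is worthless.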
Jeff Goldblum explained it all before almost being eaten by a T-Rex.<br /><br />What makes this sensitive-dependence business such a bummer is that even the most expensive weather-measurement instruments necessarily have some amount of <i>error</i>. For example, a device that measures humidity might detect the presence of water, maybe even down to the last molecule, but that doesn't mean that other patches of nearby air have that same water content. The instrument is kind of like a <a href="http://doctormath.blogspot.com/2008/10/poll-to-poll.html">pollster</a>, asking the air how it's planning to "vote" in the upcoming "election," but it can't "poll" everyone, because there are something like <a href="http://en.wikipedia.org/wiki/Ideal_gas_equation">10^24</a> "registered voters" in every cubic foot of air. And let's face it, some of them are just "crazy." Failing to account for even a small number of these leads to enough initial uncertainty that prediction becomes more or less impossible. One of the more disturbing things I came across in the course of researching this post was the <a href="http://www.theweatherprediction.com/issues/6/">advice to meteorologists</a> that they should speak in certainties, because "People need to plan for what they need to do. People do not like to be unsure of what will happen." Well, I'm sorry, people, but that's just life.<br /><br />The output of all these models, then, is just an estimate of the <i>plausibility</i> of the occurrence of rain, just as the statement "the chance of rolling a fair die and getting a 3 is 1/6" is an estimate of that event's plausibility. With enough identical trials, one could perhaps judge whether the estimate had been correct, since the frequency of occurrence should converge to its probability (according to the <a href="http://doctormath.blogspot.com/2009/02/unexpected-values-part-1.html">Law of Large Numbers</a>). 
However, that whole idea is kind of inapplicable here, because it's impossible to observe multiple independent instances of the same exact weather conditions for the same place at the same time. What would it even mean to say, "Out of 100 instances when conditions were exactly like this in New York on March 8, 2009, it rained in about 40"?* One of the persistent fallacies regarding weather prediction is that it is frequently "wrong." But how can an estimate of uncertainty be wrong? Even unlikely events do occasionally occur--consider 10 flips of a coin; any 10-flip sequence has probability 1/1024 of occurrence, but one of them has to happen. It doesn't mean the probability estimate was wrong. Bottom line, I'll take the National Weather Service over my uncle's "trick knee" any day.<br /><br />So, the complexity of weather and the imprecision of measurement is one source of uncertainty, but there's actually another one: the weatherman/weatherwoman doesn't know <i>where</i> <i>you are </i>in his/her forecast area. See, the weather forecast covers a fairly large area (like a whole zipcode), and inside that area there are lots of measurement stations, each of which could detect precipitation during the day. Even <i>if </i>a meteorologist knew <i>for a fact</i> that it was going to rain in exactly 40% of the area (and even knew where, too), he/she <i>still</i> would tell you the chance of it raining on <i>you</i> was 40%, since as far as he/she knows, you could be anywhere in town. In a way, interpretation 1 above is a <i>possibility</i>, although it's not by any means the only one. For example, on the other extreme, it could be that the weatherperson thinks that there's a 40% chance it's going to rain <i>everywhere</i> and a 60% chance it's not going to rain <i>anywhere</i>, in which case it doesn't matter at all where you are--your chance of getting rained on is 40%. 
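To make that distinction concrete, here's a toy sketch in Python. The two "scenarios" are invented for illustration, not taken from any actual forecast model; they simply show how the same 40% figure can hide very different situations:

```python
# Scenario A: rain is certain, but it covers only 40% of the area.
# Scenario B: 40% chance it rains everywhere; otherwise it rains nowhere.
p_rain_A, coverage_A = 1.0, 0.4
p_rain_B, coverage_B = 0.4, 1.0

# Chance that a point chosen uniformly at random gets rained on:
pop_A = p_rain_A * coverage_A    # 0.4
pop_B = p_rain_B * coverage_B    # 0.4

# ...but "rain SOMEWHERE in the area" differs completely:
p_somewhere_A = p_rain_A         # 1.0 -- it definitely rains somewhere
p_somewhere_B = p_rain_B         # 0.4

print(pop_A, pop_B, p_somewhere_A, p_somewhere_B)
```

Both scenarios announce "40% chance of rain" for you, a randomly located listener, yet they disagree totally about whether it will rain at all.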
I can't climb inside Al Roker's head (unfortunately) nor do I have access to his computer models, but most likely, I think the truth is some mixture of these ideas--the 40% figure accounts for both the probability of rain at all <i>and</i> the distribution of rain if it occurs.<br /><br />This, by the way, is what's wrong with interpretation 5 above, in case you were wondering; there's a subtle difference between saying "the probability of rain at any given point" versus "the probability of rain at some point," assuming that it's possible that it could rain in some places and not rain in others. Think of it this way: if I randomly chose to send an Applebee's gift card to one of my parents for Purim, the probability of any <i>given</i> parent (either Mom Math or Dad Math) getting the card would be 50%, but the probability of <i>some</i> parent getting the card would be 100% (I'm definitely sending it to someone). Another way to phrase it would be to say that if I picked a parent at random after giving out the card, the chance that I would pick the person with the card would be 50%.<br /><br />In the end then, a more precise definition of what it means to say, "There's a 40% chance of rain" would be something like:<br /><br />"Given all available meteorological/<a href="http://en.wikipedia.org/wiki/Lepidoptery">lepidopteric</a> information, subject to measurement error and uncertainty, I estimate the chance of a location selected uniformly at random from the forecast area receiving any measurable precipitation during the day at 40%."<br /><br />Maybe a little too wordy for morning drive-time, though.<br /><br />-DrM<br /><br />*Side note: it did actually rain today, and Dr. Math chose not to bring an umbrella despite a forecast of "80% chance of rain." 
Sometimes you have to roll <a href="http://en.wikipedia.org/wiki/Craps#Multi_roll_bets">the hard 6</a>.<br /><br /><br />Wholly Whexagons, continued (2009-03-03)<br /><br /><a href="http://doctormath.blogspot.com/2009/02/wholly-whexagons.html">Last time</a>, on Ask Doctor Math:<br /><br />"... it's true that hexagons do make an appearance in a large variety of different contexts."<br /><br />"Primarily, I'm referring to <i>regular</i> hexagons--hexagons with 6 equal sides..."<br /><br />"... by putting a bunch of identical hexagons together as tiles, we can cover an entire plane surface."<br /><br />"... this way of packing circles has the property of being <i>optimal</i>..."<br /><br />"... hexagon madness..."<br /><br /><br />And now, the conclusion!<br /><br />While regular hexagons have all those great qualities that make them perfect for the needs of bees and frat-boys alike, it happens that irregular (that is, not necessarily equal-sided) hexagons have a somewhat amazing property, too. That is, in some sense they're the <i>average</i> <i>random</i> <span style="font-style: italic;">polygon</span>. What I mean here is that if we generate a random sectioning of a flat surface into a large number of polygonal shapes, the average number of sides per polygon will be about 6. (Side note: this, <a href="http://doctormath.blogspot.com/2009/02/unexpected-values-part-3.html">of course</a>, doesn't mean that there actually <span style="font-style: italic;">are</span> any hexagons; it could be that the shapes are composed of equal parts squares and octagons, say, but in practice, this is unlikely.)<br /><br />First, we should be clear on some terminology: a "polygon," from the Greek "poly" = "many" and "gon" = "angle," is a collection of points connected together by line segments that enclose an area of the plane. 
The points are called "vertices" (singular "vertex," not "vertice"!), and the segments are called "sides" of the polygon. Notice that it's always true that polygons have the same number of sides as vertices. When we put some polygons together in the plane, their sides are now called "edges," except if two sides overlap each other, we only count that as one edge. Also, when vertices overlap each other, we only count that as one vertex. For example:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SbCcuO7NfWI/AAAAAAAAAHU/VQc37ms_W60/s1600-h/polys1.JPG"><img style="cursor: pointer; width: 360px; height: 329px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SbCcuO7NfWI/AAAAAAAAAHU/VQc37ms_W60/s400/polys1.JPG" alt="" id="BLOGGER_PHOTO_ID_5309916278764174690" border="0" /></a><br />Now, before I demonstrate that hexagons are the average, I'll need to lay out a couple of house rules for generating these random polygons:<span style="font-weight: bold;"><br /><br />Rule 1.</span> All the vertices of the shapes line up with each other, so no brick-patterns, like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_zy58RlQeD60/SajL3te608I/AAAAAAAAAG8/vCrTqFmCvH8/s1600-h/Wallpaper_group-cmm-1.jpg"><img style="cursor: pointer; width: 272px; height: 272px;" src="http://3.bp.blogspot.com/_zy58RlQeD60/SajL3te608I/AAAAAAAAAG8/vCrTqFmCvH8/s320/Wallpaper_group-cmm-1.jpg" alt="" id="BLOGGER_PHOTO_ID_5307716318818653122" border="0" /></a><br /><br />This is disallowed because it has a vertex of one brick in the middle of the side of another brick.<br /><br /><span style="font-weight: bold;">Rule 2.</span> Every vertex is the junction of exactly 3 polygons (except for a few on the boundary, which we'll ignore).<br /><br />Now, why are these reasonable? 
Well, the most common setting for this kind of random polygon-generation is the formation of cracks in some surface, like mud or peanut brittle:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SbCdshKEzbI/AAAAAAAAAHc/-oV94jlkHdA/s1600-h/Giants_causeway_closeup.jpg"><img style="cursor: pointer; width: 256px; height: 192px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SbCdshKEzbI/AAAAAAAAAHc/-oV94jlkHdA/s400/Giants_causeway_closeup.jpg" alt="" id="BLOGGER_PHOTO_ID_5309917348810247602" border="0" /></a><br /><br />(Note: we're approximating a crack here as a straight line, which takes some imagination.)<br /><br />Rule 1 essentially states that no cracks spontaneously form in the middle of already existing sides; instead, they emanate from junctions between existing cracks. This is reasonable because a junction between cracks is likely to be a weaker point in the surface than any point along an existing crack. Another way to think about it is that cracks occasionally <span style="font-style: italic;">split</span> into more than one crack, but when they do, both cracks generally change direction, instead of one continuing on like nothing had happened. 
In pictures, this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_zy58RlQeD60/SbCfUGTLsWI/AAAAAAAAAHk/L0vcL0fDqTQ/s1600-h/polys2.JPG"><img style="cursor: pointer; width: 261px; height: 193px;" src="http://2.bp.blogspot.com/_zy58RlQeD60/SbCfUGTLsWI/AAAAAAAAAHk/L0vcL0fDqTQ/s400/polys2.JPG" alt="" id="BLOGGER_PHOTO_ID_5309919128307085666" border="0" /></a><br /><br />is much more likely than this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_zy58RlQeD60/SbCfcaMjj1I/AAAAAAAAAHs/R0lPUnz_jGc/s1600-h/polys3.JPG"><img style="cursor: pointer; width: 334px; height: 161px;" src="http://3.bp.blogspot.com/_zy58RlQeD60/SbCfcaMjj1I/AAAAAAAAAHs/R0lPUnz_jGc/s400/polys3.JPG" alt="" id="BLOGGER_PHOTO_ID_5309919271086952274" border="0" /></a><br /><br />Similarly, rule 2 states that cracks tend to only split off into pairs. To see why that's reasonable, imagine if a crack were trying to split into 3 cracks. If any one of the three were just the slightest bit late to form, it would end up splitting off from one of the already existing 2 cracks, instead of the original one. 
In pictures, again, this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_zy58RlQeD60/SbCguH1c_9I/AAAAAAAAAH0/Qx_u5VWhiu8/s1600-h/polys4.JPG"><img style="cursor: pointer; width: 237px; height: 168px;" src="http://3.bp.blogspot.com/_zy58RlQeD60/SbCguH1c_9I/AAAAAAAAAH0/Qx_u5VWhiu8/s400/polys4.JPG" alt="" id="BLOGGER_PHOTO_ID_5309920674907488210" border="0" /></a><br />is much more likely to occur naturally than this:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SbCgxKCONZI/AAAAAAAAAH8/lX4CnDIg9sk/s1600-h/polys5.JPG"><img style="cursor: pointer; width: 246px; height: 171px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SbCgxKCONZI/AAAAAAAAAH8/lX4CnDIg9sk/s400/polys5.JPG" alt="" id="BLOGGER_PHOTO_ID_5309920727037523346" border="0" /></a><br /><br />There is actually physics that could back me up here, but for now we'll just take these as givens.<br /><br />It turns out that these same rules make sense in other settings, as well--for example(s), the formation of soap bubbles:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SbCmc8PLhNI/AAAAAAAAAIU/4FzxoJiy3dg/s1600-h/Soapbubbles1b.jpg"><img style="cursor: pointer; width: 400px; height: 245px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SbCmc8PLhNI/AAAAAAAAAIU/4FzxoJiy3dg/s400/Soapbubbles1b.jpg" alt="" id="BLOGGER_PHOTO_ID_5309926976806159570" border="0" /></a><br /><br />the shape of storm clouds, like this one on Saturn:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SbCmvy3rzoI/AAAAAAAAAIc/67mt7RVWdRM/s1600-h/688px-Saturn_hexagonal_north_pole_feature.jpg"><img style="cursor: pointer; width: 400px; height: 348px;" 
src="http://1.bp.blogspot.com/_zy58RlQeD60/SbCmvy3rzoI/AAAAAAAAAIc/67mt7RVWdRM/s400/688px-Saturn_hexagonal_north_pole_feature.jpg" alt="" id="BLOGGER_PHOTO_ID_5309927300709207682" border="0" /></a><br /><br />the sections on a tortoise shell:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_zy58RlQeD60/SbCm11Yk8aI/AAAAAAAAAIk/H2xJ52YrSfg/s1600-h/400px-Carapax.svg.png"><img style="cursor: pointer; width: 357px; height: 400px;" src="http://2.bp.blogspot.com/_zy58RlQeD60/SbCm11Yk8aI/AAAAAAAAAIk/H2xJ52YrSfg/s400/400px-Carapax.svg.png" alt="" id="BLOGGER_PHOTO_ID_5309927404463255970" border="0" /></a><br />and even France!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SbCsG2mf31I/AAAAAAAAAI0/w2OlVD2i5z4/s1600-h/France.jpg"><img style="cursor: pointer; width: 380px; height: 400px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SbCsG2mf31I/AAAAAAAAAI0/w2OlVD2i5z4/s400/France.jpg" alt="" id="BLOGGER_PHOTO_ID_5309933194405994322" border="0" /></a><br /><br />Can you spot the hexagons?<br /><br />OK, before we get to crack apart soapy French turtles on Saturn, we need to take a little side trip to talk about a fundamental fact about polygons in the plane, called <a href="http://en.wikipedia.org/wiki/Euler_characteristic" target="_blank">Euler's formula</a>.<br /><br />Euler's formula says that, subject to the rules above, no matter how we arrange a collection of polygons in the plane, the number of <i>vertices</i> minus the number of <i>edges</i> plus the number of <span style="font-style: italic;">polygons</span> (also called <span style="font-style: italic;">faces</span>) is a constant. 
For our purposes, the constant is 1.* In equation form,<br /><br /><span style="font-style: italic;">V</span> - <span style="font-style: italic;">E</span> + <span style="font-style: italic;">F</span> = 1<br /><br />It actually makes quite a lot of sense if you think about it for a second. Imagine we only had one shape, a lonely little triangle all alone in the plane. So, <span style="font-style: italic;">V</span>, the number of vertices, would be 3, as would the number of edges, <span style="font-style: italic;">E</span>. And <span style="font-style: italic;">F</span>, the number of faces, in this case a frowny face, would be 1:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SbCjydKYX6I/AAAAAAAAAIE/acWnhn8EIEQ/s1600-h/polys6.JPG"><img style="cursor: pointer; width: 188px; height: 212px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SbCjydKYX6I/AAAAAAAAAIE/acWnhn8EIEQ/s400/polys6.JPG" alt="" id="BLOGGER_PHOTO_ID_5309924047886770082" border="0" /></a><br />Hence,<br /><br /><span style="font-style: italic;">V</span> - <span style="font-style: italic;">E</span> + <span style="font-style: italic;">F</span> = 3 - 3 + 1 = 1<br /><br />Now, if we tacked on a friend for the triangle, for example a square, the resultant shape would have 5 total vertices, 6 total edges, and 2 (now smiley) faces:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SbCj07aOLfI/AAAAAAAAAIM/7ZSvrSAsreg/s1600-h/polys7.JPG"><img style="cursor: pointer; width: 333px; height: 264px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SbCj07aOLfI/AAAAAAAAAIM/7ZSvrSAsreg/s400/polys7.JPG" alt="" id="BLOGGER_PHOTO_ID_5309924090366012914" border="0" /></a><br />So again, <span style="font-style: italic;">V</span> - <span style="font-style: italic;">E</span> + <span style="font-style: italic;">F</span> = 5 - 6 + 2 = 1. 
Essentially, the square gobbled up 2 of the triangle's vertices and 1 of its edges, while adding 4 of both edges and vertices and 1 extra face. As a result, the quantity <span style="font-style: italic;">V - E + F</span> stayed constant. By similar reasoning, you could convince yourself that the same would happen no matter what shape we tacked on. And we can now repeat the process by adding a third polygon, and a fourth, and so on, until we have a whole polygon party on our hands.<br /><br />OK now, at long last, we come back to random shapes. In our random polygonal mix, let's call <span style="font-style: italic;">S</span> the total number of <i>sides</i> the polygons have altogether (different from <i>E</i> because 2 sides overlap to form 1 edge). Polygons individually always have the same number of vertices and sides, and since each vertex is shared by 3 polygons, the net total number of vertices is <span style="font-style: italic;">S</span>/3. Also, each edge is shared by 2 sides (except for a small number of boundary edges), so <span style="font-style: italic;">E</span>, the total number of edges, is the same as <span style="font-style: italic;">S</span>/2. 
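Incidentally, Euler's formula is easy to check by brute-force counting. Here's a minimal sketch in Python (the function name is my own invention), counting up an m-by-n grid of unit squares; note that the formula holds for any arrangement of polygons in the plane, not just ones obeying our two rules:

```python
# count vertices, edges, and faces in an m-by-n grid of unit squares
def euler_characteristic(m, n):
    V = (m + 1) * (n + 1)            # grid points
    E = m * (n + 1) + n * (m + 1)    # horizontal plus vertical segments
    F = m * n                        # square faces (unbounded region excluded)
    return V - E + F

# V - E + F = 1, no matter the size of the grid
assert all(euler_characteristic(m, n) == 1
           for m in range(1, 20) for n in range(1, 20))
```

The same count comes out to 1 if you swap the squares for triangles or hexagons, which is exactly Euler's point.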
Putting these into Euler's formula gives us:<br /><br /><span style="font-style: italic;">S</span>/3 - <span style="font-style: italic;">S</span>/2 + <span style="font-style: italic;">F</span> = 1<br /><br />Equivalently, <span style="font-style: italic;">F</span> = 1 + <span style="font-style: italic;">S</span>/2 - <span style="font-style: italic;">S</span>/3.<br /><br />We can combine the <span style="font-style: italic;">S</span>/2 - <span style="font-style: italic;">S</span>/3 to get <span style="font-style: italic;">S</span>/6, so we have:<br /><br /><span style="font-style: italic;">F</span> = 1 + <span style="font-style: italic;">S</span>/6<br /><br />Multiplying by 6 and dividing by F gives us:<br /><br />6 = 6/<span style="font-style: italic;">F</span> + <span style="font-style: italic;">S</span>/<span style="font-style: italic;">F</span><br /><br />Now if we imagine the number of faces being very large, this tells us that 6/<span style="font-style: italic;">F </span>is very small, so <span style="font-style: italic;">S</span>/<span style="font-style: italic;">F</span> is very close to 6. In other words, the ratio of sides to polygons is about 6, so the average polygon is a hexagon!<br /><br />Next time you're out, keep your compound eyes peeled for hexagons, regular and otherwise, and someday you too can catch the hexagon madness!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SbRBYLThBXI/AAAAAAAAAI8/DyQUzw-0oS8/s1600-h/01240601500852.jpg"><img style="cursor: pointer; width: 400px; height: 266px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SbRBYLThBXI/AAAAAAAAAI8/DyQUzw-0oS8/s400/01240601500852.jpg" alt="" id="BLOGGER_PHOTO_ID_5310941744183969138" border="0" /></a><br /><br />-DrM<br /><br />*Those in the know, take note: the reason the constant is 1 and not 2 is that I'm not counting the unbounded component as a face.
However, it doesn't matter in the end, because the constant gets divided by <span style="font-style: italic;">F</span>, so this same property would be true in a topological space with any Euler characteristic.drmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com3tag:blogger.com,1999:blog-1920088135580776574.post-68717718125189388132009-02-27T20:47:00.001-08:002009-03-04T08:07:13.847-08:00Wholly Whexagons!<i>Dear Dr. Math,<br />I've noticed that hexagons show up in a lot of different places. Now that I've started looking for them, I see them everywhere! What's the deal with hexagons?<br />Jules, Canton OH</i><br /><br />Dear Jules,<br /><br />I know I'm not supposed to play favorites with mathematical objects, but I have to confess, the hexagon is probably my favorite shape. (Sorry, <a href="http://en.wikipedia.org/wiki/Rhombicuboctahedron" target="_blank">rhombicuboctahedron</a>.)<br /><br />While I suspect that you may be experiencing a fair amount of <a href="http://en.wikipedia.org/wiki/Confirmation_bias" target="_blank">confirmation bias</a>, it's true that hexagons do make an appearance in a large variety of different contexts. (It's also possible that you have the "hexagon madness" and are seeing them when they're not actually there. You might want to get that checked out.)<br /><br />Part of the reason hexagons are so ubiquitous is that they have so many useful properties, probably even more than "familiar" shapes like squares and trapezoids. 
Primarily, I'm referring to <i>regular</i> hexagons--hexagons with 6 equal sides--like this guy:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SajJqpz_UDI/AAAAAAAAAGE/zLJiDIwrt9k/s1600-h/400px-Regular_hexagon.svg.png"><img style="cursor: pointer; width: 220px; height: 220px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SajJqpz_UDI/AAAAAAAAAGE/zLJiDIwrt9k/s320/400px-Regular_hexagon.svg.png" alt="" id="BLOGGER_PHOTO_ID_5307713895471730738" align="center" border="0" /></a><br /><br />Probably these are the ones you're seeing, Jules. Next time, I'll talk a little about the properties of <i>irregular</i> hexagons and why you might expect to see those, too.<br /><br />First of all, a regular hexagon has the property that its opposite sides are parallel to each other, making it an ideal shape for a nut or bolt, because it fits nicely into a wrench:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_zy58RlQeD60/SajJ6_og6eI/AAAAAAAAAGM/TBBn2KS0FZY/s1600-h/P431.jpg"><img style="cursor: pointer; width: 213px; height: 200px;" src="http://2.bp.blogspot.com/_zy58RlQeD60/SajJ6_og6eI/AAAAAAAAAGM/TBBn2KS0FZY/s320/P431.jpg" alt="" id="BLOGGER_PHOTO_ID_5307714176207088098" align="center" border="0" /></a><br /><br />Squares, octagons, and some other <span style="font-style: italic;">n-</span>gons [those with even <span style="font-style: italic;">n</span>] have the same property, but with a hexagonal nut or bolt, you can grab it at a variety of different <i>angles</i>, which is useful if you're putting together Ikea furniture in a tiny Manhattan apartment, for example. Also, since the exterior angles of an <span style="font-style: italic;">n-</span>gon add up to 360° and there are <span style="font-style: italic;">n</span> of them, each one measures 360/<span style="font-style: italic;">n</span>. 
Therefore, more sides aren't really so good, because as the number of sides gets larger, the sharpness of the corners decreases, allowing for a greater possibility of slippage. A hexagon seems to be a nice compromise between a 2-gon and an <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinfty" align="center" />-gon for these purposes.<br /><br />Another important property of regular hexagons (that I'm sure you're aware of if you've ever looked at the floor of a public bathroom) is that they <i>tile</i> the plane. In other words, by putting a bunch of identical hexagons together as tiles, we can cover an entire plane surface:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_zy58RlQeD60/SajKLg_J6xI/AAAAAAAAAGU/GH8WAhz7eO4/s1600-h/320px-Tile_6,3.svg.png"><img style="cursor: pointer; width: 257px; height: 257px;" src="http://2.bp.blogspot.com/_zy58RlQeD60/SajKLg_J6xI/AAAAAAAAAGU/GH8WAhz7eO4/s320/320px-Tile_6,3.svg.png" alt="" id="BLOGGER_PHOTO_ID_5307714460038327058" align="center" border="0" /></a><br /><br />There are other tilings, of course, with squares or triangles, but this one has some very appealing aspects. (For one, it's made of hexagons!) It turns out that among all possible tilings of the plane of shapes with a fixed <span style="font-style: italic;">area</span>, the hexagonal one has the smallest possible <i>perimeter</i>.<br /><br />One way to think about this is that the perimeter:area ratio goes <span style="font-style: italic;">down</span> as a shape gets closer to being a circle. 
So, if we're using <span style="font-style: italic;">n</span>-gons to tile the plane, we want <span style="font-style: italic;">n</span> to be as big as possible. On the other hand, we have to be able to glue them together so that at each point of intersection, the angles add up to 360°. By the same reasoning as before, we can show that the <span style="font-style: italic;">interior</span><br />angles on an <span style="font-style: italic;">n</span>-gon are each 180 - 360/<span style="font-style: italic;">n</span>, and since there have to be at least 3 of these angles meeting at each corner, the greatest this angle can be is 120°. In this case, <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?180%20-%20%5Cfrac%7B360%7D%7Bn%7D%20=%20120" align="center" />, so 360/<span style="font-style: italic;">n</span> = 60; therefore, <span style="font-style: italic;">n</span>=6. Hexagon!<br /><br />Now, why does all of that matter? Well, say you weren't cutting these tiles out of a piece of ceramic but instead were building up <i>walls</i> to section an area into a number of <i>chambers</i>. 
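The angle argument above can also be run mechanically. This little Python sketch (variable names are mine) looks for whole-number solutions of k * (180 - 360/n) = 360 with at least 3 polygons meeting at each corner:

```python
# which regular n-gons can tile the plane by themselves?
tilers = []
for n in range(3, 13):
    k = 2 * n / (n - 2)         # solves k * (180 - 360/n) = 360
    if k >= 3 and k == int(k):  # need a whole number of polygons, at least 3
        tilers.append((n, int(k)))

print(tilers)  # [(3, 6), (4, 4), (6, 3)]: triangles, squares, hexagons
```

Hexagons are the end of the line: past n = 6, the interior angles are too wide for even 3 polygons to meet at a corner.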
If the material in those walls was really expensive for you to produce, it would <b>bee</b> in your best interests to make the chambers in the shape of a regular hexagon:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_zy58RlQeD60/SajKZtydTAI/AAAAAAAAAGc/1Dg25QO_NYE/s1600-h/Honey_comb.jpg"><img style="cursor: pointer; width: 320px; height: 240px;" src="http://2.bp.blogspot.com/_zy58RlQeD60/SajKZtydTAI/AAAAAAAAAGc/1Dg25QO_NYE/s320/Honey_comb.jpg" alt="" id="BLOGGER_PHOTO_ID_5307714703992900610" align="center" border="0" /></a><br /><br />I don't know how bees managed to figure this out and yet here we are still living in rectangular grids like chumps.<br /><br />A slightly different, but related, property of the regular hexagonal tiling is that it shows up if you're trying to <i>pack</i> together some circles:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_zy58RlQeD60/SajKphd6CsI/AAAAAAAAAGk/YFFuaE-VlPw/s1600-h/oranges.JPG"><img style="cursor: pointer; width: 320px; height: 240px;" src="http://3.bp.blogspot.com/_zy58RlQeD60/SajKphd6CsI/AAAAAAAAAGk/YFFuaE-VlPw/s320/oranges.JPG" alt="" id="BLOGGER_PHOTO_ID_5307714975563385538" align="center" border="0" /></a><br /><br />See the hexagons?<br /><br />Once again, this way of packing circles has the property of being <i>optimal</i>, in the sense that it leaves the least amount of empty space between circles. 
In fact, using a little <a href="http://doctormath.blogspot.com/2009/02/trig-or-treat.html" target="_blank">trigonometry</a>, we can even work out the efficiency of this packing:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SajLbtB3ojI/AAAAAAAAAGs/6uGv26rKEfU/s1600-h/263px-Empilement_compact_plan.svg.png"><img style="cursor: pointer; width: 239px; height: 228px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SajLbtB3ojI/AAAAAAAAAGs/6uGv26rKEfU/s320/263px-Empilement_compact_plan.svg.png" alt="" id="BLOGGER_PHOTO_ID_5307715837660471858" align="center" border="0" /></a><br /><br />The triangle in the picture is equilateral with all sides equal to <span>2</span><span style="font-style: italic;">*r</span>, where <span style="font-style: italic;">r</span> is the radius of the circles we're packing. If we split one in half (where the two black circles intersect), we'll get a right triangle with hypotenuse 2<i style="font-style: italic;">*r</i> and one leg <i>r</i>. Therefore, by the Pythagorean Theorem, if <i>h</i> is the height, then <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?h%5E2%20+%20r%5E2%20=%20%282r%29%5E2%20=%204r%5E2" align="center" /><i>.</i> So <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program."
src="http://latex.codecogs.com/gif.latex?h%5E2%20=%203r%5E2" align="center" />, and therefore, <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?h%20=%20%5Csqrt%7B3r%5E2%7D%20=%20r%5Csqrt%7B3%7D" align="center" />. That means the area of the triangle is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D%20*%5Ctext%7Bbase%7D*%5Ctext%7Bheight%7D" align="center" /><img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?=%20%5Cfrac%7B1%7D%7B2%7D%20*%20%282r%29*r%5Csqrt%7B3%7D%20=%20r%5E2%20%5Csqrt%7B3%7D" align="center" />.<br /><br />Inside each triangle are three pieces of a circle, which together make up half of a circle of radius <span style="font-style: italic;">r</span>. Thus, the area of the circular pieces is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B%5Cpi%20r%5E2%7D%7B2%7D" align="center" />. 
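Putting those two areas together gives the packing efficiency. A short check in Python (the variable names are mine; notice that the radius cancels out of the final ratio):

```python
from math import pi, sqrt

r = 1.0                            # circle radius; it cancels in the ratio
triangle_area = r * r * sqrt(3)    # (1/2) * base (2r) * height (r * sqrt(3))
circle_area = pi * r * r / 2       # three 60-degree wedges = half a circle
density = circle_area / triangle_area
print(density)                     # 0.9068..., i.e. about 91%
```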
This means the ratio of circular area to total area of the triangle is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%28%5Cfrac%7B%5Cpi%20r%5E2%7D%7B2%7D%29/%28r%5E2%20%5Csqrt%7B3%7D%29%20=%20%5Cfrac%7B%5Cpi%7D%7B2%5Csqrt%7B3%7D%7D" align="center" />, approximately 0.91. Since the whole plane is made up of these triangles, the proportion of circle-area to total-area is the same, meaning the circles take up about 91% of the space. Pretty efficient, and fun at parties, too!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SajLoo5QR-I/AAAAAAAAAG0/Ti2lR4vTJH4/s1600-h/beer.jpg"><img style="cursor: pointer; width: 320px; height: 295px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SajLoo5QR-I/AAAAAAAAAG0/Ti2lR4vTJH4/s320/beer.jpg" alt="" id="BLOGGER_PHOTO_ID_5307716059888895970" align="center" border="0" /></a><br /><br /><br /><span style="font-style: italic;">To Be Continued...</span><br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com10tag:blogger.com,1999:blog-1920088135580776574.post-24006210179614853182009-02-24T21:03:00.000-08:002009-02-25T07:07:17.239-08:0080085With Valentine's Day just passed and Ash Wednesday lurking around the corner, I know the topics of sex and pregnancy are on a lot of people's filthy guilt-ridden minds these days. To help people understand their risks, and to show I'm not a prude, I'm hosting a little get-together (an orgy, if you will) of questions all about sex. 
So turn the lights down low, put on some soft music, and enjoy this special "adults only" post about what we in the math business call "multiplication."*<br /><i><br /><br />Dear Dr. Math,<br />I read in an article that "Normally fertile couples have a 25 percent chance of getting pregnant each cycle, and a cumulative pregnancy rate of 75 to 85 percent over the course of one year." How do you go from 25% to 85? I don't see the connection between those two numbers.<br />Name Withheld</i><br /><br />As is often the case, Name, the way to understand the probability of getting pregnant over some number of time intervals (I almost wrote "periods" there but then reconsidered) is instead to think about the probability of <i>not</i> getting pregnant during any of those intervals. We can use the fact that the chance of something happening is always 1 minus the chance of it <i>not </i>happening. This turns out to be a generally useful technique whenever you're interested in the occurrence of an event over multiple trials. To take my favorite <a href="http://doctormath.blogspot.com/2009/02/lets-make-deal-or-no-deal.html">over-simplified example</a> of flipping a coin, if we wanted to find the chance of flipping an H (almost wrote "getting heads"--geez, this is har.., er, difficult) in the first 3 flips, we could go through all of the possible 3-flip sequences and count how many of them had at least one H, or we could just observe that only <i>one</i> sequence <i>doesn't </i>contain an H (namely, TTT). Since the probability of flipping T ("getting tails") is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D" align="center" /> on each flip, the chance of "doing it three times" is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%5E3%7D%20=%20%5Cfrac%7B1%7D%7B8%7D" align="center" />. Thus, the probability of at least one H is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?1%20-%20%5Cfrac%7B1%7D%7B8%7D%20=%20%5Cfrac%7B7%7D%7B8%7D" align="center" />. Phew.<br /><br />Similarly here, there are lots of different ways to get pregnant over the course of a year (believe me), but only one way to <i>not</i> get pregnant. If we take the first statistic as correct, that the chance of a normally fertile couple getting pregnant in each cycle is 25%, then we could assume that the chance of not getting pregnant in each cycle was 75%, or 0.75. Assuming a "cycle" is 28 days long, there would be 13 cycles per year, so by the same reasoning as above, we could say that the chance of not getting pregnant in a year is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%280.75%29%5E%7B13%7D=0.024" align="center" />, about 2.4%. 
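To make the arithmetic concrete, here are the coin example and the pregnancy estimate side by side in Python, with the key assumption (that the 13 cycles are independent) spelled out in the comments:

```python
# coin: chance of at least one H in three flips is 1 minus P(TTT)
p_at_least_one_head = 1 - 0.5 ** 3
print(p_at_least_one_head)        # 0.875, i.e. 7/8

# pregnancy: same trick, ASSUMING the 13 cycles are independent
p_no_conception = 0.75 ** 13      # miss every one of 13 cycles
print(p_no_conception)            # 0.0238..., about 2.4%
print(1 - p_no_conception)        # 0.9762..., about 97.6%
```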
So, the chance of "being in the family way" at some point during the year would be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?1-0.024%20=%200.976" align="center" />, or 97.6%.<br /><br />Now, that doesn't match up with the observed number you quoted, 85%. In the study, of course, all they do is assemble some group of "normally fertile" couples and count the number of times they get pregnant in a year. We were trying to solve the problem "top down" whereas the data is observed from the "bottom up." What's going on? Well, the problem was our assumption that the different cycles were <i>independent</i> from each other, in the sense that knowing what happened in one cycle doesn't affect our estimation of what will happen in the next. For coin-flipping, this is a reasonable assumption, but for copulation, not so much. It makes sense that there should be some <i>correlation</i> between the different cycles, because the possible causes for infertility one month might continue to be true the next. For example, it could be that either or both partners have some kind of medical condition that makes conception less likely. Or maybe the guy's underwear is too tight, I don't know. But it seems that the assumption of independence probably doesn't hold. Also, it's not entirely clear what's meant by "normally fertile" here, since (as far as I know) it's only really possible to know if a couple is "fertile" if they've succeeded in having a baby. 
So, it's possible that the data includes some number of couples who were just less fertile and perhaps didn't know it.<br /><br />The correct way to understand these compound probabilities is to consider the probability of not conceiving in one cycle <i>conditional</i> on the event that you had not conceived the cycles previously. Unfortunately, I don't have access to that information from personal experience, nor a good mental model for what numbers would be reasonable. However, it seems like the probability of <i>not </i>conceiving should be <i>higher</i> than ordinary if you know already that you've gone some number of months without conceiving. As a result, the odds of getting pregnant in a year should be <i>lower</i> than our estimate assuming independence, which does in fact agree with the data.<br /><i><br /></i><i><br />Dear Dr. Math,<br />Planned Parenthood's web site says, "Each year, 2 out of 100 women whose partners use condoms will become pregnant if they always use condoms correctly." Is that the same as saying that condoms are 98% effective? If so, does that mean that if you have sex 100 times, you'll likely get somebody pregnant twice? (I mean, if you're a man. If you're a woman I imagine the rate of impregnating your partner will probably slip in the direction of zero.) Yours always,<br />Name Withheld</i><br /><br />Oh, you freaky Name Withheld, you've asked the question backwards! In fact, the statistic you give of 2 women out of 100 becoming pregnant in a year is how the effectiveness of condoms is <i>defined.</i> That is, in the birth control industry, specifically, when someone claims that a particular method is "<i>x</i>% effective," it <i>means</i> that if a group of women use that method, over the course of the year about (100-<i>x</i>)% of them will get pregnant. Now, there are a number of assumptions being made here, not the least of which is that those women (and their partners) used the method correctly. 
Without actually going into people's bedrooms (or living rooms, or kitchens?) and tallying up on a clipboard whether their condom use was "incorrect", it's impossible to know for sure. Instead, people who do surveys of this kind have to rely almost exclusively on what people <i>say</i> they did. And let me ask you something: If you accidentally impregnated someone/got impregnated by someone while nominally using some birth control method, would you say, when asked, that you had been using it "incorrectly"? Or would you, as all good carpenters do, blame your tools?<br /><br />Another implicit assumption is that the respondents reflect a typical number of sexual encounters in a year. Again, I don't know how they decide what participants to include in this kind of study or how they verify the claims they get, but according to <a href="http://www.ncbi.nlm.nih.gov/pubmed/7738077">some</a> <a href="http://family.jrank.org/pages/1102/Marital-Sex-Sexual-Frequency.html">studies</a> I was able to find, the average "coital frequency", as it's romantically known, for both married and single people in the U.S. is somewhere around 7 encounters per month. Therefore, if we treated the experiences as being independent (with the same caveat as in the previous question), we could estimate the probability of unintended pregnancy in a <i>single sexual encounter</i>:<br /><br />Let's call the probability <i>p</i>. So the chance of <span style="font-style: italic;">not</span> getting pregnant during a given sex act is (1-<i>p</i>). We'll accept the 7 times/month figure and assume a total of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?7*12=84" align="center" /> sexual encounters per year, all including correct condom usage. As in the coin example, we've assumed independence, so the probability of <i>not</i> getting pregnant over the course of 84 trials is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%281-p%29%5E%7B84%7D" align="center" />, which we're assuming is equal to the stated number of 98%. Therefore, we have:<br /><img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%281-p%29%5E%7B84%7D=0.98" align="center" /><br />And so <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%281-p%29%20=%200.98%5E%7B1/84%7D%20=%200.99975" align="center" />, meaning that <i>p</i> is very small, about 0.02%. 
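That solve is short enough to sketch in Python, using the 98% figure and the assumed 84 encounters per year:

```python
# Under the independence assumption, (1 - p)**84 = 0.98; solve for the
# per-encounter probability p of an unintended pregnancy.
p = 1 - 0.98 ** (1 / 84)   # ~0.00024, i.e. about 0.02%

# Expected pregnancies over 100 encounters (as in the question):
expected = 100 * p         # ~0.02
```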
Therefore, if you had sex 100 times, as you say (and congrats, btw), you could expect to make an average of 0.02 babies.<br /><br />Some important notes:<br />1) Our assumption of independence here may be more reasonable than in the previous example, because it's possible that whatever factors contribute to a birth control method failing despite proper use may be due more to chance than any kind of recurring trends.<br />2) Also, these numbers don't account for the fact that (as we saw above) the chance of getting pregnant in a year even <span style="font-style: italic;">without any protection</span> is something like 85%. So, in a sense, condoms "only" reduce the risk of pregnancy from 85% to 2%.<br />3) We've only been talking about pregnancy here, not the risks of other things like STDs or panic attacks.<br />4) Wear a condom, people!<br /><br /><br /><i>Dear Dr. Math,<br />Mathematically speaking, what number makes for the best sexual position?<br />Name Withheld<br /><br /></i>You seem to be asking a lot of questions, NW.<br /><br /><a href="http://xkcd.com/487/">Personally, I've always enjoyed the ln(2π)</a>.<br /><br />-DrM<br /><br />*Also acceptable: <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cint%20e%5Ex" align="center" /> or "integration by parts".drmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com4tag:blogger.com,1999:blog-1920088135580776574.post-39347738434360062432009-02-23T00:39:00.000-08:002009-02-24T20:10:47.255-08:00The Infinite Monkey Strikes BackI've gotten some very interesting responses to <a href="http://doctormath.blogspot.com/2009/02/in-hole-in-grou31aadnm-vnatoh424.html">my post</a> about the Infinite Monkey Theorem, concerning the likelihood of a monkey accidentally reproducing <i>The Hobbit</i> by randomly generating letters. So I thought I'd write a follow-up to address some of them; also it gives me another chance to imagine a monkey typing on a typewriter. (I tried letting a monkey type this up for me, but he wrote something much more interesting than I had planned.)<br /><br /><br /><i>Dear Dr. Math,<br />I believe the monkey problem makes the simplifying assumption that all the letters in the text are independent, which they're not in real English (or any other language). How does this affect the results? Real texts are sampled very narrowly from the space of possible letter sequences.<br />CN</i><br /><br />Excellent question, CN; I'm glad you brought it up. In fact, the distribution of the letters in the text is irrelevant to the problem. The important assumption is that the letters <i>being output by the monkey</i> are equally likely and probabilistically independent of each other. Under this assumption, it doesn't matter if the text the monkey's trying to match is <i>The Hobbit</i> or the phone book or a sequence of all "7"s--the probability of matching any sequence of 360,000 characters will be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. 
Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B50%5E%7B360,000%7D%7D" align="center" />. If you need convincing, consider the simpler example of flipping a fair coin 5 times. Any particular sequence of 5 flips, for example HHTHT, has the same probability, <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%5E5%7D" align="center" />, of coming up. So, if we were trying to match any flip sequence, we'd have the same chance. The same is true here, just on a much larger scale (and with monkeys).<br /><br />Now, many people object to this idea, because they think that a letter sequence like "a;4atg 9hviidp" is somehow more "random" than a sequence like "i heart hanson". Therefore, they reason, the first sequence would be more likely to occur by chance. But actually the two sequences have exactly the same probability of occurrence, under our assumptions. Really, the only difference between the two is what we could <i>infer</i> about the message source (beyond musical tastes) based on receiving such an output. I hope to discuss this in detail someday in the context of <a href="http://en.wikipedia.org/wiki/Hypothesis_testing">elementary hypothesis testing</a>, but if, say, there were some <i>doubt</i> in our minds as to whether the source of these characters was, in fact, uniformly random, the latter message would give us considerable evidence to help support that doubt. The reason is that we could provide an <i>alternative</i> hypothesis that would make the observed data much more likely. 
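The coin version of this claim is small enough to check by brute force. A sketch:

```python
from itertools import product

# All 2**5 = 32 sequences of 5 fair-coin flips, each equally likely.
seqs = list(product("HT", repeat=5))

# Any particular target, "random-looking" or not, matches exactly one
# of the 32 sequences, so its probability is 1/32.
for target in ("HHTHT", "HHHHH", "THTHT"):
    assert seqs.count(tuple(target)) / len(seqs) == 1 / 32
```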
For the monkey problem, however, we were assuming that we knew how the characters were being generated, so there's no doubt.<br /><br /><br /><i>Dear Dr. Math,<br />Along the lines of your previous questions on large numbers and randomly generating the book The Hobbit, I'd like to ask about randomly generating images. A low res PC display is normally 640 x 480 pixels. If you randomly generated every combination of color pixels, wouldn't you have created every image imaginable at that resolution? That is, one of the screens would be the Mona Lisa, one would be your Ask Doctor Math page, one would be a picture of the Andromeda galaxy from close up, one would be a picture of you!, etc. If you only wanted to look at black & white images, you'd have a much smaller collection, but once again wouldn't you generate every B&W screen image possible? </i> <div><i>With feature recognition software getting better all the time, one could "mine" these images for recognizable features. Similar to the way the pharmaceutical companies sequence through millions of plants to find new substances, one could sequence through these images to extract unknown info.<br />Mike<br /></i><br />Dear Mike,<br /><br />Absolutely, we could apply the same techniques to any form of information that can be reduced to a sequence of numbers or letters, like images, CDs, chess games, DNA sequences, etc. In fact, we needn't generate them randomly, either. As in <i>The Library of Babel</i> example, one could imagine a vast collection of all possible sequences, generated systematically one at a time with no repeats. 
Unfortunately, for any <i>interesting</i> form of information, the number of possibilities is simply too great to make it practical.<br /><br />In your example of 640x480 pixel images, even assuming the images were <a href="http://www.photoshopessentials.com/images/essentials/black-white/channel-mixer/monochrome-image.jpg">1-bit monochrome</a>, there would still be 2 possibilities ("on" or "off") for each of the 640*480 = 307,200 pixels. Therefore, the number of possible images would be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?2%5E%7B307,200%7D" align="center" />, which is about <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B92,476%7D" align="center" />. Remember <a href="http://doctormath.blogspot.com/2009/02/how-big-is-that-number-episode-1.html">how big a googol is</a>? Well, this number is about <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%281%20%5Ctext%7B%20googol%7D%29%5E%7B1000%7D" align="center" />. So, not even all the crappy low-res monitors in the universe could possibly display them, even at their lousy maximum refresh rate of 60 images/second. 
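Those counts are easy to double-check without ever constructing the giant number itself (a sketch; the digit-count formula floor(n*log10(2)) + 1 is standard):

```python
from math import log10

pixels = 640 * 480                   # 307,200 one-bit pixels
# Number of decimal digits in 2**307200:
digits = int(pixels * log10(2)) + 1  # 92,477 digits, i.e. about 10**92,476
```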
And even worse, we'd have no reason to believe any of the images we <i>did</i> see, because they'd be indistinguishable from all the many other conflicting images.<br /><br />Your comparison to pharmaceutical companies is interesting, but remember those companies are starting with a large (but manageable) collection of plants that <i>actually exist</i>, not searching through the space of all possible arrangements of plant cells or something. It's OK to search for a needle in a haystack sometimes, but not when the haystack is larger than the known universe.<br /><br /><br /><i>Dear Dr. Math,<br />Unless I misunderstand (and that's quite possible), I think you've introduced a major flaw here... The "second chunk" begins at character number 2, not character number 360,001. There is no reason why these should be considered discrete chunks and so just because the first character isn't "I" doesn't affect the fact that the second and subsequent characters may spell out the work. Thusly, your monkeys are producing over 17 million "blocks" a day, not just 48...<br /></i><i>A. Nonymous<br /><br /></i>Well, A, that all depends on how we set up our assumptions. The way I had pictured things, the monkey was typing out a whole manuscript of 360,000 characters at a time and then having someone (perhaps J.R.R. Tolkien himself!) check it over and see if it was exactly the same as <i>The Hobbit</i>. If not, the monkey tries again and, with very high probability, fails.<br /><br />However, your idea is more interesting and perhaps more "realistic". That is, we could have Prof. Tolkien just watch over the monkey's shoulder as it typed and see if <i>any string</i> of 360,000 consecutive characters were <i>The Hobbit</i>. So, if the monkey started by typing a preamble of gibberish and then typed the correct text in its entirety, we'd still count it as correct. 
As you say, this means that the possible "chunks" we'd need to consider have a lot of overlap to them--we might find the text in characters 1 through 360,000 or 2 through 360,001, etc. But unfortunately, it's not just the number of chunks being produced we need to reconsider; because of the way they overlap, we've now introduced <i>relationships</i> between the chunks that mean our assumption of independence no longer holds. For example, if we knew the first block of characters was incorrect, we could determine whether it was even possible for the second block to be correct based on the particular <i>way</i> the first block was wrong. In fact, we'd know it was impossible unless the first block was something like "xin a hole in the ground there lived a hobbit...".<br /><br />Actually, if we thought about things in this way, then CN's question above <i>would</i> be relevant, because the codependency of the overlapping chunks would depend heavily on the <i>particular</i> text we were trying to match. Consider the example of coin-flipping again: assume we were flipping the coin until we got the string TH. There are 3 possible ways we could fail on the first pair of flips, all equally likely: TT, HT, and HH. If we got TT or HT, then we could succeed on the third try by flipping an H. If we started with HH, there's no way we could get TH on the third flip. The number of ways of succeeding would be 2, out of a possible 6. So the probability of succeeding in the second block given that we failed in the first would be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7B6%7D=%5Cfrac%7B1%7D%7B3%7D" align="center" />.<br /><br />Now, if we were trying to match HH and we knew we failed on the first 2 flips, there would still be 3 equally likely possibilities. Either we flipped TT, TH, or HT. If we started off with TT or HT, we can't possibly win on the third flip. But if we got TH first, we'd have a <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D" align="center" /> chance of flipping H on the third flip and matching. Thus, our probability of matching in the second block given that we failed in the first would only be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B6%7D" align="center" />. 
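Both conditional probabilities can be verified by enumerating the eight equally likely three-flip sequences. A sketch:

```python
from itertools import product

def p_second_block(target):
    """P(flips 2-3 match target | flips 1-2 did not), as (hits, cases)."""
    seqs = ["".join(s) for s in product("HT", repeat=3)]
    failed = [s for s in seqs if s[:2] != target]      # block 1 misses
    hits = [s for s in failed if s[1:] == target]      # block 2 matches anyway
    return len(hits), len(failed)

assert p_second_block("TH") == (2, 6)   # 1/3, as computed above
assert p_second_block("HH") == (1, 6)   # 1/6
```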
Here's a chart showing all of the possibilities:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_zy58RlQeD60/SaJhM-qOZ8I/AAAAAAAAAF0/kCqI1RBtMOo/s1600-h/matching_coinflips.JPG"><img style="cursor: pointer; width: 400px; height: 314px;" src="http://2.bp.blogspot.com/_zy58RlQeD60/SaJhM-qOZ8I/AAAAAAAAAF0/kCqI1RBtMOo/s400/matching_coinflips.JPG" alt="" id="BLOGGER_PHOTO_ID_5305910186602293186" align="center" border="0" /></a><br /></div> <div><br /></div> <div> </div> <div> </div> <div>The two probabilities are different because TH can overlap in more ways with the wrong texts, whereas HH can only overlap with TH.<br /></div> <div><br />Therefore, our previous strategy of multiplying probabilities, which rested on the assumption of independence, won't work here. In order to explain how long it would take the monkey to produce <i>The Hobbit</i> with high probability under your scheme, I'd have to go into some fairly heavy-duty math involving <a href="http://en.wikipedia.org/wiki/Markov_chain">Markov chains</a> and their transition probabilities. The relevant probabilities can be found by raising a 360,000 x 360,000 matrix to the <i>n</i>th power--not generally an easy thing to do. But it turns out that the <i>expected </i>(i.e., <a href="http://doctormath.blogspot.com/2009/02/unexpected-values-part-1.html">average</a>) number of characters the monkey would have to type before finishing would still be on the order of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?50%5E%7B360,000%7D" align="center" />, similar to the previous setup. </div> <div> </div> <div><br />Either way, you and J.R.R. 
would have probably given up by that point.<br /></div> <div><br />-DrM<br /><br /></div>drmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com3tag:blogger.com,1999:blog-1920088135580776574.post-26986865576974264232009-02-21T14:09:00.000-08:002009-02-23T02:00:28.750-08:00What "mean" means<i>Dear Dr. Math,<br />My parents live about 200 miles away from me, so I make the drive back and forth a lot, with no stops. Almost exactly halfway in between the speed limit changes, so instead of driving 55 mph I drive 80 mph. Since my average speed is 67.5 mph, shouldn't it take me 200/67.5 = 2</i><span style="white-space: nowrap;">.96</span><i> hours to get there? I've noticed it always takes a little longer, but I don't get it. I've even set the cruise control and kept the speeds exactly constant.<br />Chuck<br /><br /></i>Dear Chuck,<br /><br />I'm going to go ahead and assume that you live in one of those places in <a href="http://en.wikipedia.org/wiki/Speed_limits_in_the_United_States" target="_blank">Utah or west Texas</a> where the speed limit actually is 80 mph. Otherwise, you've been speeding, and I can't endorse that kind of behavior. OK? OK. Don't make me write a post about the correlation between speeding and traffic fatalities. I swear I will turn this blog around.<br /><br />Here's why your numbers didn't add up: while it's true that the average, in the sense of <a href="http://en.wikipedia.org/wiki/Arithmetic_mean" target="_blank">arithmetic mean</a>, of 55 and 80 is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B55+80%7D%7B2%7D=67.5" align="center" /> mph, that's actually the wrong kind of average to be using in this circumstance. 
"Kinds of averages?" Oh yes. Allow me to explain:<br /><br />In the course of your trip, you drive half the distance, 100 miles, at 55 mph. So that leg takes you <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B100%7D%7B55%7D=1.81" align="center" /> hours. On the second half, you're going the<span style="font-weight: bold;"> legal speed limit</span> of 80 mph, so that half should take you <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B100%7D%7B80%7D=1.25" align="center" /> hours. Altogether, then, your driving time is 1.81 + 1.25 = 3.06 hours, a little more than you expected.<br /><br />Rather than the arithmetic mean here, you should have been calculating your <a href="http://en.wikipedia.org/wiki/Harmonic_mean" target="_blank">harmonic mean</a>, which for two numbers <span style="font-style: italic;">A</span> and <span style="font-style: italic;">B</span> is defined as <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7B%5Cleft%28%5Cfrac%7B1%7D%7BA%7D%20+%20%5Cfrac%7B1%7D%7BB%7D%5Cright%29%7D" align="center" />. 
To see why that's the right quantity, let's denote by <span style="font-style: italic;">S</span> your real average speed for the trip, that is, the total distance you traveled divided by your total time. If <span style="font-style: italic;">T</span> is the total time you spent driving, then <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?S%20=%20%5Cfrac%7B200%7D%7BT%7D" align="center" />; equivalently, <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?T%20=%20%5Cfrac%7B200%7D%7BS%7D" align="center" />. If <span style="font-style: italic;">A</span> is the speed you went for the first half and <span style="font-style: italic;">B</span> is the speed for the second half, then another way you could calculate the total time is as <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?T%20=%20%5Cfrac%7B100%7D%7BA%7D%20+%20%5Cfrac%7B100%7D%7BB%7D" align="center" />, just like we did previously. As usual, in math when we compute the same thing two different ways we end up with an interesting equation. 
In this case, since the times are equal, we get:<br /><img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B200%7D%7BS%7D%20=%20%5Cfrac%7B100%7D%7BA%7D%20+%20%5Cfrac%7B100%7D%7BB%7D" align="center" />.<br />Dividing through by 100 on both sides gives us<br /><img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7BS%7D%20=%20%5Cfrac%7B1%7D%7BA%7D%20+%20%5Cfrac%7B1%7D%7BB%7D" align="center" /><br />which, if you take reciprocals of both sides and multiply by 2, yields the formula for the harmonic mean. In this particular example, <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?S%20=%20%5Cfrac%7B2%7D%7B%5Cleft%28%5Cfrac%7B1%7D%7B55%7D%20+%20%5Cfrac%7B1%7D%7B80%7D%5Cright%29%7D=65.1" align="center" /> mph, so your guess of 67.5 mph was only off by a little bit.<br /><br />So, when is the arithmetic mean the right one? If you had gone on a trip and spent an equal amount of <i>time</i> driving 55 mph and 80 mph, then your average speed <i>would</i> be the arithmetic mean of the two. To see that, let's just assume you drove 1 hour at each speed. 
Thus, your total distance traveled would be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?55*1%20+%2080*1%20=%20135" align="center" /> miles, and your total time is 2 hours, so the average speed is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B135%7D%7B2%7D=67.5" align="center" /> mph. Voilà! If you look at that calculation closely, you can pretty clearly see why it should always give you the arithmetic mean--you're just adding the two speeds together and dividing by 2. Similarly, another way to see that the arithmetic mean is inappropriate for the equal <i>distance</i> problem is to notice that by driving the same distance at each speed, you spend <i>more time</i> at the slower speed and <i>less time</i> at the faster one.<br /><br />There's actually yet another kind of mean, called the <a href="http://en.wikipedia.org/wiki/Geometric_mean" target="_blank">geometric mean</a>, which shows up when you're computing ratios, percents, interest rates, and other things that are typically multiplied together. For two numbers A and B, it's defined as <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?G%20=%20%5Csqrt%7BA*B%7D" align="center" />. For example, let's say you were a rabbit farmer and your population of rabbits grew by 50% one year and only 10% the next. The combined effect at the end of two years would be that the population had increased by a factor of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?1.50*1.10%20=%201.65" align="center" />, for an increase of 65%. To achieve that same growth at a constant rate, say a factor of R <span style="font-style: italic;">for each year</span>, you'd need <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?R%20*%20R%20=%201.65" align="center" />, so <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?R%20=%20%5Csqrt%7B1.50*1.10%7D%20=%201.28" align="center" />. So in a sense the "average" growth rate was 28% per year. Many people in this kind of situation would be tempted to guess that the average was 30%, splitting the difference between 50% and 10%. You can see that it's not far off from the truth, but it's not quite right. 
And why be almost right when you can be exactly right?<br /><br />The point of all these means is to replace the <i>net effect</i> of two <i>different</i> values with the effect of just a <i>single</i> value repeated. But you have to be careful to consider exactly how those quantities are <i>interacting</i> to produce that combined effect. When they simply add together, the relevant type of mean is the arithmetic one; when they multiply, the correct mean is geometric; and when they do that weird thing of combining via their reciprocals, you use the harmonic mean. Interestingly enough, for any two positive numbers, if <span style="font-style: italic;">M</span> is their arithmetic mean, <span style="font-style: italic;">G</span> is the geometric mean, and <span style="font-style: italic;">H</span> is the harmonic mean, it's always the case that <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?M%20%5Cge%20G%20%5Cge%20H" align="center" />. 
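For the 55 mph / 80 mph example, all three means, and the inequality between them, can be computed directly (a sketch):

```python
from math import sqrt

A, B = 55.0, 80.0
arithmetic = (A + B) / 2          # 67.5  -- equal *time* at each speed
geometric = sqrt(A * B)           # ~66.3 -- compounding growth rates
harmonic = 2 / (1 / A + 1 / B)    # ~65.2 -- equal *distance* at each speed

trip_hours = 200 / harmonic       # ~3.07, matching the leg-by-leg total
assert arithmetic >= geometric >= harmonic
```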
In fact, there are <a href="http://en.wikipedia.org/wiki/Quadratic_mean">other</a> <a href="http://en.wikipedia.org/wiki/Heronian_mean">means</a>, <a href="http://en.wikipedia.org/wiki/Arithmetic-geometric_mean">too</a>, but these three are the major players.<br /><br />Other situations where the harmonic mean might come up include: calculating average fuel economy of a car given an equal amount of city and highway driving, computing the total length of time it takes two people working together to complete a task, figuring out the net resistance of two electrical resistors in <a href="http://en.wikipedia.org/wiki/Series_and_parallel_circuits#Parallel_circuits" target="_blank">parallel</a>, finding a pleasant <a href="http://www.harmonictheory.com/music/" target="_blank">harmonic note</a> (hence the name) between two other musical notes, calculating the height of the intersection between two crossed wires, and answering questions about the uses of the harmonic mean!<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com0tag:blogger.com,1999:blog-1920088135580776574.post-18624472769490111852009-02-20T01:29:00.000-08:002009-02-25T07:17:27.635-08:00Let's Make a Deal or No Deal<i>Dear Dr. Math,<br />On the show Deal or No Deal, if the contestant gets to the point of only having two cases left they have the option to switch cases. Should they switch or not? Is this the same as the Monty Hall problem?<br />Daniel G.<br /><br /><br /></i>As Scott Bakula would say, <a href="http://www.imdb.com/title/tt0096684/quotes" target="_blank">Oh boy</a>. I guess there was no way I was going to get away with writing a math advice blog and not having to explain the Monty Hall Problem at some point. For those of you out there who may be unfamiliar with the MHP, here's the way it goes:<br /><br />You are presented with three doors and told that behind one door is a car and behind the other two are goats. 
(Here we're assuming you want the car and not the goats, but in these tough economic times maybe they should be reversed.) You pick a door and then the host, the venerable Monty Hall, always opens one of the other two doors to reveal a goat. He then offers you the chance to switch to the remaining third door. It turns out that it's always in your best interests to switch, given the available information. Doing so improves your chance of winning from <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B3%7D" align="center" /> to <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7B3%7D" align="center" />.<br /><br />Now, I see some of you reaching for that email button, getting ready to fire off an angry letter about how it <i>just can't</i> be true that switching is better than not switching. After all, there are two remaining doors and you don't know which has the car, so aren't your odds 50-50? It's <i>impossible</i>! Believe me, I sympathize, but hold it right there. Plenty of people, even professional mathematicians, have said the same thing as you. 
Whole <a href="http://www.amazon.com/Monty-Hall-Problem-Remarkable-Contentious/dp/0195367898/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1235120503&sr=8-1" target="_blank">books</a> and <a href="http://www.montyhallproblem.com/" target="_blank">websites</a> have been devoted to this topic; people have written <a href="http://math.ucsd.edu/%7Ecrypto/Monty/monty.html" target="_blank">simulators</a> that you can try out for yourself; the advice columnist <a href="http://www.marilynvossavant.com/articles/gameshow.html" target="_blank">Marilyn vos Savant</a> essentially made her career by being right about this problem and explaining why. The MHP is math's version of an optical illusion--you can stare at it and stare at it, but until you actually get the ruler out and measure, you won't be convinced. The sad truth is: Ellen Tigh's a Cylon, Darth Vader built C3PO, and switching doors in the Monty Hall Problem improves your chance of winning from <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B3%7D" align="center" /> to <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7B3%7D" align="center" />.<br /><br />Instead of opening up all the old wounds the MHP has inflicted over the years, let me try to offer my own perspective on how <i>I</i> think about the problem (inflicting all-new wounds!), and then maybe we can take those same ideas and apply them to the <i>Deal or No Deal </i>question to show why it's different.<br /><br />Let's back the train up all the way to the station and talk a little about what probability is--what it <i>means</i>. <b>Warning: Heavy Philosophy-Type Stuff Ahead</b>. As I've mentioned previously, my opinion is that probability is a way to quantify the <i>uncertainty</i> we have about the state of the world. Therefore, it's highly dependent on what information we feel that we possess about the things we observe and what consequences the information may have. For example, everyone's favorite "random" activity is flipping a coin--assuming it's a "fair coin", the probability is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D" align="center" /> that it will come up heads and <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D" align="center" /> that it will come up tails. But what does that really mean? 
Physically, we can model all the variables that go into the action of flipping a coin--weight distribution of the coin, air resistance, amount and location of force applied to the coin, the direction the coin is tossed, elasticity of the landing surface, etc. If somehow we could measure all of these things between the time the coin was tossed and the time it landed, and if we had access to a powerful enough computing device, we could <i>predict</i> whether the coin would come up heads. At the very least, we could guess ("calling it in the air") and improve our chances to more than <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D" align="center" />. Going back a step, the only parts of this system unknown to us ahead of time are the variables due to the tossing itself--the human element of thumb against coin. If, for example, we knew that the person tossing the coin were an amazingly skilled athlete who could control his hand and arm motions with extreme precision and who had practiced the technique of tossing a coin enough that he could reliably make it come up heads, we again could improve upon our 50-50 guess. As a third possibility, consider the case where the coin has <i>already</i> been flipped but we haven't seen the outcome yet (the referee's still holding it); if somebody could sneak a peek at part of the coin and tell us what they saw, we could update our information and make a better guess.<br /><br />So, what is the "real" probability? In my view, and this might be hard to swallow at first, the answer is there isn't one--the question itself is flawed. 
"Wait a minute," I can hear you objecting, "Can't we just perform experiments and measure the frequency of heads? Flip a coin a hundred times and about 50 of those will be heads, etc.?" The problem there is that <i>you're observing a different event each time</i>. You can never step in the river twice, nor flip the same coin. All the repetition does is validate the <i>predictive power</i> of your mental model that says that the factors that go into flipping coins are beyond your comprehension and result in the heads side and the tails side being equally likely. As an alternative, say, you could have the mental model (shared by many people) that those hundred coin flips were <i>predestined</i> to occur the way they did and that through meditation/prayer/drugs/etc. you can actually see into the future and predict the outcome of the next flip. It happens that the first model tends to be more successful than the second (or any others) in this instance, but we should be careful to separate the things we're assuming from the things we observe. As E.T. Jaynes wrote in <a href="http://www.amazon.com/Probability-Theory-Logic-Science-Vol/dp/0521592712/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1235121899&sr=8-1"><i>Probability Theory: The Logic of Science</i></a>, trying to verify the probability of an event by performing experiments "would be like trying to verify a boy's love for his dog by performing experiments on the dog."<br /><br />See, part of the problem with the way we humans interpret the world is that the physical laws we rely on--for example, that two colliding objects obey the law of <a href="http://en.wikipedia.org/wiki/Conservation_of_momentum#Conservation_of_linear_momentum" target="_blank">conservation of momentum</a>--can quickly outpace our abilities to calculate their consequences--say, the motions of every molecule of a balloon-full of air. 
We use probability as a way of approximating the behavior of these complex systems instead of having to understand them completely, but that doesn't mean that the events "are" random. A more powerful being might see things differently, the way adults see tic-tac-toe differently from the way little kids do. But we seem to be stuck with this uncertainty about complex systems. And there's really no system on Earth more complex than a human, which brings us back to the MHP.<br /><br />In the setup to the Monty Hall Problem, we've assumed some things, all of which pertain to the actions of other people. First, there is the assumption that the car is equally likely to be behind any of the three doors (actually, assumption zero is that there even is a car at all). Presumably, some producer or somebody chose which door to put it behind--it's possible they might have had a preference for door #1, for example, because it's closer to the loading dock or looks better on TV. If we had records of thousands of shows, we might gain some insight into their decision process and detect some bias. But we're assuming otherwise. Secondly, and this is the real key, we have the assumption that <i>Monty Hall knows which door has the car behind it</i>. As a consequence, we can deduce that by opening up the remaining door (or one of the two remaining doors, if we initially chose the one with the car), he has <i>added information</i> to the set of things we know about the game. Namely, we know that if the car <i>had</i> been behind one of the other two doors, he would have been <i>forced</i> to open the door he did--that's essentially why switching gives us a <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7B3%7D" align="center" /> chance of winning. If the other door had opened by <i>chance</i>, say a gust of wind blew it open and we happened to see the goat, then we'd have no reason to conclude anything about whether we should switch, because <i>we </i><i>just as easily could have seen the car</i>. So, by knowing what Monty knows, we can improve our chances. In coin terms, it's as though we had a prearranged deal with the referee where if the coin is tails, he just tells us half the time and stays quiet the other half, and if the coin is heads he always stays quiet--so if he doesn't speak, we know there's a <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7B3%7D" align="center" /> chance the coin is heads.<br /><br />Now, on <i>Deal or No Deal</i>, hosted by the incomparable Howie Mandel, the situation is somewhat different. For those who haven't seen the show, it works like this: a contestant picks one of 26 briefcases, each containing a different dollar amount. He/she then opens some or all of the remaining briefcases and decides whether to keep going or sell the initial case. In the extreme situation in which he/she keeps going all the way to the end and there are only two cases left, the contestant has the option to keep the original case or switch. Let's you and I pretend that we were on the show. For simplicity, let's assume that initially 25 of the 26 cases had $0, and the one remaining case had $1 million. Also, let's assume that we opened 24 cases and inside each one was a big fat $0 (we got to say "NO DEAL!" a bunch of times, which was fun; also, they brought out Ellen Degeneres at some point). 
What does that mean about our prospects? Should we switch? Well, our assumptions, again, were (1) all cases were equally likely to contain the million dollars, and (2) nobody on the show knew which case was which. Under those assumptions, it doesn't matter if we switch or not, since the probability is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D" align="center" /> of each case having the million. It's just like the Monty Hall Problem if Monty didn't know which door had the car behind it--nobody has given us any additional information with which to prefer one case over the other. If, however, we knew <i>that Howie knew</i> which case had the winner, and <span style="font-style: italic;">he</span> had started the show by opening all the other cases, then we should absolutely switch in a heartbeat, because it would improve our chance of winning from <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B26%7D" align="center" /> to <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B25%7D%7B26%7D" align="center" />. It's all about what <i>information</i> Howie gives us. 
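For those of you still reaching for that email button: here's a small Python simulation--my own sketch, not one of the official simulators linked above--that plays the three-door game both ways, with a host who knows where the car is and with a door that opens by dumb luck:

```python
import random

random.seed(0)  # reproducible runs

def play(switch, host_knows, trials=100_000):
    """Simulate the three-door game, returning the observed win fraction.

    host_knows=True  -> classic Monty Hall: the host always opens a goat door.
    host_knows=False -> one of the other doors opens at random; rounds where
                        it reveals the car are thrown out, since we only count
                        games that look like the one we observed (a goat).
    """
    wins = played = 0
    for _ in range(trials):
        car, pick = random.randrange(3), random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_knows:
            opened = next(d for d in others if d != car)
        else:
            opened = random.choice(others)
            if opened == car:
                continue  # not the situation we're analyzing
        played += 1
        final = next(d for d in range(3) if d not in (pick, opened)) if switch else pick
        wins += (final == car)
    return wins / played

print(play(switch=True,  host_knows=True))   # close to 2/3
print(play(switch=False, host_knows=True))   # close to 1/3
print(play(switch=True,  host_knows=False))  # close to 1/2
```

The "dumb luck" variant is exactly the <i>Deal or No Deal</i> situation: once we condition on having seen only goats (or only empty cases), and nobody knowledgeable did the revealing, the two remaining choices are even money.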
Also, if he could give us Anya's phone number while he's at it, that would help us out, too.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com5tag:blogger.com,1999:blog-1920088135580776574.post-40197793499609548972009-02-18T13:48:00.000-08:002009-03-03T19:23:10.409-08:00Trig or Treat<i>Dear Dr. Math,<br />This is a question I thought of while pondering the air intake of a wood stove. The air intake is a series of holes, covered or uncovered by a sliding metal plate with equal sized holes.<br />Imagine 2 circles with equal radius, R. Slide one circle over the other. Express, in terms of R, how far one circle has to occlude the other such that half of the area is covered.</i><br /><span style="font-style: italic;">Bob H., Ashland, OR</span><br /><br />Dear Bob,<br />Here's a picture of the problem, if I understand it correctly:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SZyez1AWNaI/AAAAAAAAAEU/GX7IFXiQthM/s1600-h/trig-1.JPG"><img style="cursor: pointer; width: 400px; height: 181px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SZyez1AWNaI/AAAAAAAAAEU/GX7IFXiQthM/s400/trig-1.JPG" alt="" id="BLOGGER_PHOTO_ID_5304289074374653346" align="center" border="0" /></a><br /><br />For legal reasons, before we get to the solution, I feel I should warn all you readers out there: what follows may involve some high-school level <span style="font-weight: bold;">trigonometry</span>, which I understand many of you have intentionally purged from your brains to make room for <span style="font-style: italic;">Grey's Anatomy </span>plots. 
Part of the reason I like this question so much is that it shows that these concepts may very well have some relevance (outside of the very important pursuit of measuring the heights of buildings using a sextant) despite your high school math teacher's best attempts to convince you otherwise. <span style="font-weight: bold;">Those readers who are subject to trigonometry-induced seizures should turn back now</span>.<br /><br />OK, with that out of the way, let's blow up part of the picture and label some of the relevant objects. The goal is to get a handle on this shady part of town:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SZygKmFcyqI/AAAAAAAAAEc/E7qPluCNwCE/s1600-h/trig-2.JPG"><img style="cursor: pointer; width: 201px; height: 194px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SZygKmFcyqI/AAAAAAAAAEc/E7qPluCNwCE/s400/trig-2.JPG" alt="" id="BLOGGER_PHOTO_ID_5304290565018143394" align="center" border="0" /></a><br />First, there's the radius, <span style="font-style: italic;">R</span>, which we've assumed is the same for the two circles. Let's call the angle formed by the center and two points of intersection <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta" align="center" />. Note: this has nothing to do with <a href="http://en.wikipedia.org/wiki/Thetans">thetans</a> (or does it?). Splitting this angle down the middle forms two right triangles with an angle of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. 
You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta%20/%202" align="center" />. According to the rules of <b>trigonometry</b>, the height of each triangle is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?R%20%5Csin%28%5Ctheta%20/%202%29" align="center" /> and the width is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?R%20%5Ccos%28%5Ctheta%20/%202%29" align="center" />, as I've labeled here:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SZyhw5wyk6I/AAAAAAAAAEk/2IHR5D3QMB0/s1600-h/trig-3.JPG"><img style="cursor: pointer; width: 290px; height: 245px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SZyhw5wyk6I/AAAAAAAAAEk/2IHR5D3QMB0/s400/trig-3.JPG" alt="" id="BLOGGER_PHOTO_ID_5304292322646856610" border="0" /></a><br />Now, the strategy I'd like to employ to compute the area of that funny little almond-shaped region, which I'll call <span style="font-style: italic;">C</span>, is to think of it as consisting of two pieces, each of which is the difference between a pie-slice of the circle, <span style="font-style: italic;">A</span>, and a triangle, <span style="font-style: italic;">B</span>. 
In pictures:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SZzQ4XatJhI/AAAAAAAAAFc/2LT5s2nP4Jo/s1600-h/trig-4.JPG"><img style="cursor: pointer; width: 416px; height: 133px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SZzQ4XatJhI/AAAAAAAAAFc/2LT5s2nP4Jo/s400/trig-4.JPG" alt="" id="BLOGGER_PHOTO_ID_5304344127912879634" border="0" /></a><br /><br />The reason this helps is that circles and triangles are shapes whose areas we know how to compute. "Funny little almond shapes," not so much.<br /><br />The area of the circular slice is in proportion to the whole area of the circle as the angle <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta" align="center" /> is to the whole angle of a circle, 360° (a quarter of a circle takes up 90°, for example). So in terms of R and <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta" align="center" />, that's <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7BA%7D%7B%5Cpi%20R%5E2%7D%20=%20%5Cfrac%7B%5Ctheta%7D%7B360%7D" align="center" /> and so <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?A%20=%20%5Cfrac%7B%5Ctheta%7D%7B360%7D%20*%20%5Cpi%20R%5E2" align="center" />.<br />Now, the area of the whole triangle is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D%20%28%5Ctext%7Bbase%7D%29*%28%5Ctext%7Bheight%7D%29" align="center" />, which in this case is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D%20R%20%5Ccos%28%5Ctheta/2%29%20*%202%20*%20R%20%5Csin%28%5Ctheta/2%29" align="center" />. So, <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?B%20=%20R%5E2%20%5Ccos%28%5Ctheta/2%29%20%5Csin%28%5Ctheta/2%29" align="center" />. 
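With the pie slice A and the triangle B in hand (and the plan from before of building the almond shape out of two slice-minus-triangle pieces), here's a quick sanity check--my own sketch, not part of the original solution--that pits these formulas against a brute-force Monte Carlo count of random points, at an arbitrary test angle of 120°:

```python
import math, random

R = 1.0
theta_deg = 120.0            # arbitrary test angle between 0 and 180
t = math.radians(theta_deg)

A = (theta_deg / 360) * math.pi * R**2          # pie slice
B = R**2 * math.cos(t / 2) * math.sin(t / 2)    # triangle
almond = 2 * (A - B)                            # two slice-minus-triangle pieces

# Monte Carlo: circles centered at (0, 0) and (d, 0), where d = 2R cos(theta/2).
# The overlap lives in the strip d - R <= x <= R, -R <= y <= R.
d = 2 * R * math.cos(t / 2)
random.seed(1)
n, hits = 200_000, 0
for _ in range(n):
    x = random.uniform(d - R, R)
    y = random.uniform(-R, R)
    if x * x + y * y <= R * R and (x - d) ** 2 + y * y <= R * R:
        hits += 1
estimate = hits / n * (2 * R - d) * (2 * R)
print(round(almond, 3), round(estimate, 3))   # the two numbers should nearly agree
```

If the formula and the point count disagreed, we'd know a slice or a triangle went missing somewhere.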
By a sneaky trick I learned in <span style="font-weight: bold;">trigonometry</span> class, I can rewrite this as <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?B%20=%20%5Cfrac%7B1%7D%7B2%7DR%5E2%5Csin%28%5Ctheta%29" align="center" />.<br /><br />If we throw all these things into the hopper, we get that the area of the almond-shaped piece, <span style="font-style: italic;">C</span>, is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?2*%28A-B%29" align="center" />, and so <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?C%20=%202*%5Cleft%28%5Cfrac%7B%5Ctheta%7D%7B360%7D*%5Cpi%20R%5E2%20-%20%5Cfrac%7B1%7D%7B2%7D%20R%5E2%20%5Csin%28%5Ctheta%29%20%5Cright%29" align="center" />. I can feel some of you starting to panic out there, but just take a deep breath and try to relax. Put on some Enya or something--maybe that song she wrote about <span style="font-weight: bold;">trigonometry</span>.<br /><br />What were we doing? 
Oh yeah, right; now we have a formula for computing the area of the overlapping part of the two circles, which only depends on the angle <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta" align="center" />. The question was, When is this area equal to half the area of the circle?, so we need to <span style="font-style: italic;">solve</span> for <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta" align="center" />. Half the area of the circle is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D%5Cpi%20R%5E2" align="center" />; therefore, the equation we need to solve is:<br /><br /><img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D%5Cpi%20R%5E2%20=%202*%5Cleft%28%5Cfrac%7B%5Ctheta%7D%7B360%7D*%5Cpi%20R%5E2%20-%20%5Cfrac%7B1%7D%7B2%7D%20R%5E2%20%5Csin%28%5Ctheta%29%5Cright%29" align="center" /><br /><br />which, after we divide through by <span style="font-style: italic;"></span><img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?R%5E2" align="center" /> and clean up a bit leaves us with:<br /><br /><img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B%5Cpi%7D%7B2%7D%20=%20%5Cfrac%7B%5Cpi%7D%7B180%7D%20*%5Ctheta%20-%20%5Csin%28%5Ctheta%29" align="center" />.<br /><br />It's interesting to pause here and note that <span style="font-style: italic;">R </span>completely vanished from the equation. This means whatever configuration we come up with as an answer must have the same angle, independent of the radius.<br /><br />OK. So, how do we solve this equation?<br /><br />Actually... we don't.<br /><br />The problem is that we have a <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Ctheta" align="center" /> and a <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Csin%28%5Ctheta%29" align="center" />, and the thing about those two is that they're like Sydney and Cristina on <span style="font-style: italic;">Grey's Anatomy</span>; they just don't mix well. Unfortunately, there's no way to get any further with this equation using the rules of algebra. So, here's where we cheat and approximate the solution with a calculator (in my case, a TI-89). Maybe someday I'll tell you all about <a href="http://en.wikipedia.org/wiki/Newton%27s_method">what goes on inside a calculator</a> when it does these approximations, or about how we could use calculus to solve the problem if the shape were something else. But anyway, for now, the answer is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta%20%5Capprox%20132.4%5E%7B%5Ccirc%7D" align="center" />.<br /><br />Lastly, we should translate this answer into a more meaningful form, for example by figuring out what the distance is between the two centers of the circles. Using <span style="font-weight: bold;">trigonometry</span> one last time, we can write this distance as <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. 
Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?2*R%5Ccos%28%5Ctheta/2%29" align="center" />, which for the magic <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Ctheta" align="center" /> above is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?0.808*R" align="center" />. The final picture, then, is:<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_zy58RlQeD60/SZzTYI4OBFI/AAAAAAAAAFk/s4hphFrptyE/s1600-h/trig-5.JPG"><img style="cursor: pointer; width: 400px; height: 178px;" src="http://1.bp.blogspot.com/_zy58RlQeD60/SZzTYI4OBFI/AAAAAAAAAFk/s4hphFrptyE/s400/trig-5.JPG" alt="" id="BLOGGER_PHOTO_ID_5304346872789206098" border="0" /></a><br /><br /><br /><br />Put that in your wood stove and smoke it.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com1tag:blogger.com,1999:blog-1920088135580776574.post-18492817960884993792009-02-17T16:57:00.000-08:002009-02-23T01:24:49.304-08:00How Big is That Number? Episode 1This will be the first installment of a new feature I call "How Big is That Number?" in which I try to explain exactly how big that number is.<br /><i><br />Dear Dr. 
Math,<br />How much space would a googol grains of sand take up?<br />xoxo,<br />Frequent Googol Searcher<br /><br /></i>Dear FreGooSear,<br /><br />Let me begin by thanking you for reminding us all of the correct spelling of "googol". A lot of people forget that before it inspired the name of <a href="http://www.google.com/">an obscure website</a>, the word "googol", as coined by a man named Milton Sirotta back in 1938, referred to the number <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B100%7D" align="center" />. I don't know the whole story here, but I'm guessing that as an infant, he somehow wrote a 1 followed by 100 zeroes (I'm guessing in blue crayon), and when asked by his parents what that was, all he could make were baby sounds. Hence the name. Any other explanation seems highly improbable.<br /><br />So, how big is this number with the silly name? Well, first off, if we wanted to address it in the nomenclature of billions and trillions, it would be known as ten thousand trillion trillion trillion trillion trillion trillion trillion trillion. That's a ten-thousand followed by 8 trillions. Or, if you prefer to think of it in terms of current financial events, $1 googol would be about 12,706 trillion trillion trillion trillion trillion trillion trillion <a href="http://www.washingtonpost.com/wp-dyn/content/article/2009/02/17/AR2009021700221.html?hpid=topnews">economic stimulus packages</a>.<br /><br />Now, to answer your question about grains of sand: let's approximate a grain of sand as a cube 1 millimeter across. That would mean that 1000 grains of sand in a row would be 1 meter long. 
So, one cubic meter would contain 1000*1000*1000, or <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E9" align="center" />, grains. A googol grains, therefore, would take up <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B10%5E%7B100%7D%7D%7B10%5E%7B9%7D%7D=10%5E%7B91%7D" align="center" /> cubic meters of space. And it <i>would</i> really be "space", because this is <i>way</i> bigger than our tiny little planet could accommodate. For comparison, a sphere the size of the Earth (which has a radius of about 6,371 kilometers, or 6,371,000 meters) has, according to <a href="http://en.wikipedia.org/wiki/Volume_of_a_sphere">the formula for the volume of a sphere</a>, a volume equal to <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B4%7D%7B3%7D%5Cpi%20R%5E3%20=%201.083*10%5E%7B21%7D" align="center" /> cubic meters, about <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. 
Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B30%7D" align="center" /> grains of sand, so it would take about <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B10%5E%7B100%7D%7D%7B10%5E%7B30%7D%7D=10%5E%7B70%7D" align="center" /> Earth-sized balls of sand to get a googol grains. Alternatively, if you lumped all the sand together into one giant ball, you'd need <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B4%7D%7B3%7D%5Cpi%20R%5E3%20=%2010%5E%7B91%7D" align="center" /> cubic meters, and solving for <span style="font-style: italic;">R</span> gives us a radius of approximately <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%201.34*10%5E%7B30%7D" align="center" /> meters, which is about 140 trillion light years, or 10,000 times bigger than the size of the observable universe. 
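<br /><br />If you'd like to check that arithmetic yourself, here's a quick sketch in Python (using the same rough figures as above: 1 mm cubic grains, an Earth radius of 6,371 km, and a light year of about 9.461 × 10¹⁵ meters):

```python
import math

GOOGOL = 10 ** 100
GRAINS_PER_M3 = 1000 ** 3  # a 1 mm cubic grain -> 10^9 grains per cubic meter

sand_volume_m3 = GOOGOL / GRAINS_PER_M3  # 10^91 cubic meters of sand

# How many Earth-sized balls of sand is that?
earth_radius_m = 6_371_000
earth_volume_m3 = (4 / 3) * math.pi * earth_radius_m ** 3  # about 1.08e21 m^3
earth_balls = sand_volume_m3 / earth_volume_m3             # about 10^70

# Radius of one giant ball holding all the sand: R = (3V / 4pi)^(1/3)
ball_radius_m = (3 * sand_volume_m3 / (4 * math.pi)) ** (1 / 3)
light_year_m = 9.461e15
print(f"{earth_balls:.1e} Earth-sized balls, "
      f"one big ball of radius {ball_radius_m / light_year_m:.2e} light years")
```

<br /><br />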
So maybe <i>space itself</i> wouldn't even be big enough to handle such a massive sand ball.<br /><br />Carrying the sand idea a little further, imagine there was a huge interstellar alien for whom the Earth appeared to be as small as a grain of sand is to us. Now, suppose this being lived on a giant planet, the planet Gigantica, as big, relative to the alien, as the Earth is to us. This would imply that the volume of Gigantica was in proportion to the Earth as the Earth is to a grain of sand, about <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B30%7D" align="center" /> times as big; so Gigantica itself would be about <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B51%7D" align="center" /> cubic meters in volume. If, in turn, there were a still <i>larger</i> alien for whom <i>Gigantica</i> looked like a grain of sand and who lived on an even <i>larger</i> planet, the planet Humonga, say, then Humonga would have to be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B30%7D" align="center" /> times as big as Gigantica, for a volume of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B81%7D" align="center" /> cubic meters. Now, if <i>yet an even larger</i> alien, from the planet Ginormica, looked down on <i>Humonga</i> as a tiny grain of sand, it would require <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B10%7D" align="center" /> Humonga-balls to comprise a volume of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B91%7D" align="center" /> cubic meters, which you'll recall was the volume of space taken up by a googol grains of our puny Earth-sand. Back here on Earth, <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cinline%2010%5E%7B10%7D" align="center" /> grains of sand is 10 cubic meters, about the volume of a nice 10 meter by 10 meter patch of beach (assuming a depth of 10 centimeters). So on Ginormica, this giant giant giant alien could kick back in a beach chair, sip a giant giant giant beer and build a giant giant giant sand castle out of those googol grains with plenty of sand left over to get in its megashoes.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com6tag:blogger.com,1999:blog-1920088135580776574.post-31374485720020964632009-02-13T23:27:00.000-08:002009-02-24T22:05:06.706-08:00Unexpected Values, part 3And now, the exciting conclusion!<br /><br /><br /><i>Dear Dr. Math,<br />My college administration says that the student-teacher ratio is 5:1. But all my classes have like 30 people in them. What gives? Are they just lying?<br />Sincerely,<br />A Student at a Local College<br /><br /></i>Well, ASAALC, that depends on what you mean by "lying." If you mean, Are they miscalculating their student-faculty ratio?, then the answer's probably "no." These things are public information, and anyone with a calculator can divide the number of students by the number of teachers. (Whether they include non-teaching faculty, like that weird guy who lives in the basement and hasn't taught a class since the 60s, is kind of an ethical gray-area.) However, if the question is, Are they misleading people by reporting a somewhat meaningless statistic?, then "yes." Here's how the magic trick works:<br /><br />It all comes back to the idea of average, and the different things the word "average" means to people. Many people think of "average" as meaning "typical" or "to be expected," a misperception that isn't helped any by the synonym "expected value." 
So, if a college reports that it has a student-teacher ratio of 5:1, or equivalently, that it has an average class size of 5, people leap to the conclusion that the typical class they'll encounter at the college will have 5 people in it (and how awesome is that?!). However, that may not be the typical experience, and it may even be impossible. Let's take a look at a simple example:<br /><br />If I told you that my girlfriend and I were going to flip a coin and decide whether to have a baby based on the result,* the average number of babies we would have, i.e., our expected number of babies, would be 1/2. But of course, we would be surprised, even shocked, if we actually had half a baby. In this case, the "expected" value is anything but. The value 1/2 just represents the <i>plausibility</i> we associate to the chance of having one baby. If we repeated the experiment many times, the ratio of babies to attempts would converge to 1/2.<br /><br />Similarly, suppose that a tiny school had 10 total students and 2 teachers. So, the student-teacher ratio is 5:1. Now, let's say one of the teachers is that awesome guy who lets you call him by his first name, and the other has really bad dandruff or something, so 9 people register for Class #1 and only 1 person registers for Class #2. The average class size is the average of 9 and 1, which is 5--equal to the ratio of students to teachers, as it should be. However, if I picked a random <i>student</i> and asked him (it's an all boys' school) how many people were in his class, 9 times out of 10 he would say 9, and 1 time out of 10 he would say 1. So the average <i>response I would get</i> would be <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cinline%20%5Cfrac%7B9%7D%7B10%7D*9%20+%20%5Cfrac%7B1%7D%7B10%7D*1%20=%208.2" align="center" />, for a difference of 3.2! In fact, the only way the two numbers can actually be the same is if all the classes are exactly the same size.<br /><br />If you think about it, it makes perfect sense that I'd get someone in a larger class more often in a random poll, since the larger classes <i>have more people in them</i> <span style="font-style: italic;">to get polled</span>. The question, then, is which of these numbers actually represents the "typical" student experience. Since the student-teacher ratio is generally smaller, the schools are happy to just report that and hope you don't notice the difference. What they really <i>should</i> be reporting is something more like the <i>distribution</i> of class sizes. As it is now, if you want to know what the classes are like, you've just got to go see for yourself.<br /><br />Also, what's the deal with the vending machine only having Pepsi?<br /><br />-DrM<br /><br />*For the record, that's not how we do it. We roll a 20-sided die.drmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com6tag:blogger.com,1999:blog-1920088135580776574.post-47630775408340798162009-02-13T22:23:00.000-08:002009-02-23T01:28:42.483-08:00Unexpected Values, part 2<i>Dear Dr. Math,<br />Is it ever a good idea to play the lottery? My dad says he only buys a ticket when the jackpot is bigger than the odds against winning, but we're still broke.<br />Angelica<br /><br /></i>Dear Angelica,<br /><br />NO!!<br /><br />-DrM<br /><br />P.S.--Here's why:<br /><br />Hopefully, by now you've read <a href="http://doctormath.blogspot.com/2009/02/unexpected-values-part-1.html">my previous post</a> all about expected values. (If not, go read it now; I'll wait here. OK? OK.) 
Now, the thing your dad is referring to is the fact, which is a fact, that the expected value of the lottery is occasionally greater than $1, the cost of the ticket. Assuming you play the Powerball, which is the most popular lottery in the U.S., your odds of winning the jackpot are 1 in 195,249,054. (If you'd like, I'll show you how to compute that sometime.) So, considering only the jackpot, if the payoff is higher than $195,249,054, the expected value of the lottery, i.e., the jackpot times the probability of winning, would actually be greater than the cost, so it would seem that math <span style="font-style: italic;">is</span> telling us to play. However, this is still not a convincing argument, even putting aside other practical concerns involved in winning the lottery.*<br /><br />The reason comes back to the idea of variance, which I also talked about last time. Just so you can follow along at home, the variance of a simple game like this is computed like so: you take the payoff squared times the probability of winning and subtract the expected value squared. So let's say, for example, that the jackpot was $200 million one week. Then the expected value would be the probability of winning times the jackpot, which is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B195,249,054%7D%20*%20200,000,000" align="center" />, about $1.02. The variance, therefore, is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?200,000,000%5E2%20*%20%5Cfrac%7B1%7D%7B195,249,054%7D%20-%201.02%5E2%20=%20204,866,549" align="center" />. <i>Mamma mia! </i>That's a lot of variance!<br /><br />If you remember from last time, the problem with having too high a variance in a bet is that you typically run out of money before you get to win--the distribution of your winnings/losses is too spread out. So, in effect, if you had unlimited funds (which I know for a fact you don't, Angelica) and you could play the lottery with the same odds every week for several billion years, it might actually be a good investment, because on average you would win back $1.02 for every $1 you gambled, a nice healthy 2% return. However, since you're only going to play it at most a few hundred more times (and hopefully <i>no</i> more after today!), the variance is just too high for you to handle. It's kind of a paradox, really, that the decision to place a particular bet once depends on your ability/plans to bet many times. What we see here is an interesting example of the <i>tradeoff</i> between expected value and variance. Sometimes, depending on your circumstances, it's worth sacrificing a little of one to improve the other. If someone offered you $0.99 for your $1 lottery ticket, for example, I'd recommend you sell, no matter how high the jackpot is.<br /><br />Incidentally, in my opinion, this is how the banking industries and insurance industries make money (or, at least, <i>used to</i> back when they existed). Let's say you're deciding whether to insure your $100,000 house at a cost to you of $1,500. For argument's sake, let's say you have an extra $100,000 in savings that you could use to replace the house if need be, so the only cost is monetary. And let's assume that the chance of your house being completely destroyed is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. 
You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B100%7D" align="center" /> ; you know it, and the insurance company knows it. So, you're trading a bet (no insurance) with an expected loss of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B100%7D*100,000%20=%20%5C$1,000" align="center" /> but a fairly high variance for a sure-thing loss of $1,500 (if your house burns down, you don't lose anything except the time it takes to file an insurance claim and replace your stuff) and no variance at all. But the extra peace of mind is worth something to you, so maybe you're willing to pay the $500 premium for it. Meanwhile, the insurance company (and the bank that underwrites your policy) is buying millions of these bets, and so their distribution of income is turning into a bell curve, narrowing down pretty much to a fine point centered around a $500 profit per policy. Voilà, everyone wins, but they win more simply by virtue of already being large. (The problem occurs when they start taking that money and doubling down on risky investments, unless they get bail... 
oh wait, never mind.)<br /><br />-DrM<br /><br />*The diminishing marginal utility of money, the chance of having to split winnings with another person, the amount lost to income tax, and the failure to adjust for inflation, to name a few.drmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com1tag:blogger.com,1999:blog-1920088135580776574.post-69538660779071691832009-02-13T21:32:00.000-08:002009-02-24T20:11:27.732-08:00Unexpected Values, part 1I've gotten a lot of questions all related to the idea of averages, so I'm going to devote the next 3 posts to discussing different facets of averaging. It'll be like a trilogy, but hopefully one more like <span style="font-style: italic;">Lord of the Rings</span> than <span style="font-style: italic;">Jurassic Park</span>. Stay tuned to find out!<br /><br /><br />Dear Dr. Math,<br /><i>Is there a difference in roulette between betting on black versus betting on a number, like 6? What about betting both at the same time? I usually have more fun betting on black, because my guess is that by betting on black I lose money more slowly, but I'm not sure which is actually better.<br />Sean<br /><br /></i>Dear Sean,<br /><br />My first rule of gambling is don't gamble. (The second rule of gamb.... OK, you get the idea.) But I wouldn't be much of an advice columnist if I told you to do something just because I said so, without helping you understand why. So, maybe after we dissect this gambling question we can talk about why gambling is generally a bad idea. Then we'll talk about why you should sit up straight and why you didn't call me on my birthday.<br /><br />OK, first: why the bets are fundamentally the same, and second: why they're different.<br /><br />The most rudimentary way of analyzing the quality of a bet is computing what's called its <i>expected value</i>. This is the number you get by multiplying each possible payoff by its probability and adding them all together. 
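<br /><br />That definition translates directly into code. Here's a minimal Python sketch (the function name and the coin-flip example are mine, just for illustration):

```python
def expected_value(outcomes):
    """Expected value: sum of payoff * probability over all possible outcomes."""
    return sum(payoff * prob for payoff, prob in outcomes)

# a bet that pays $2 on a fair coin flip and $0 otherwise
print(expected_value([(2, 0.5), (0, 0.5)]))  # 1.0
```

<br /><br />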
In roulette, and most other casino games, both the payoffs and the probabilities are well-known, so the expected value is easy to compute. As a convention, we always compute the expected value for a bet of $1 (my kind of bet), but for you high-rollers who bet $<i>n</i>, you can just multiply the end result by <i>n</i>. In the first example, your bet on black, there are 18 ways to win (the 18 black pockets) and 20 ways to lose (the 18 red pockets and the 2 green ones), and we're assuming that every pocket is equally likely, so the probability of winning is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B18%7D%7B38%7D" align="center" />. If you win, you get $2--your original $1 back plus another one from the house. If you lose, which happens with probability <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B20%7D%7B38%7D" align="center" />, you get nothing but my condolences, which have no cash value. So altogether your expected value is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B18%7D%7B38%7D*2+%5Cfrac%7B20%7D%7B38%7D*0%20=%20%5Cfrac%7B36%7D%7B38%7D" align="center" />, or $0.947. 
Note that the <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B20%7D%7B38%7D*0" align="center" /> wasn't really doing anything in that calculation, so for simplicity we can skip that part from now on.<br /><br />Now, the bet on 6 has a lower probability of winning but a higher payoff, and as we'll see, the two effects cancel each other out exactly. The payoff for winning a bet on 6 is $36, including your $1 plus $35 of the house's, and the probability of winning is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B38%7D" align="center" />, since there's only the one 6 on the wheel, as in life. Hence, the expected value of the bet is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B38%7D*36%20=%20%5Cfrac%7B36%7D%7B38%7D%20=%20%5C%240.947" align="center" /> again. Those clever French guys in the 18th century managed to design the game of roulette so that almost every bet has that same expected value of $0.947, or 94.7¢. 
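<br /><br />Here's the same pair of calculations as a short Python sketch (a sanity check of mine, not part of the original column):

```python
def expected_value(prob_win, payoff):
    # losing pays $0, so only the winning outcome contributes
    return prob_win * payoff

ev_black = expected_value(18 / 38, 2)  # black: 18 winning pockets, $2 back
ev_six = expected_value(1 / 38, 36)    # the number 6: one pocket, $36 back
print(round(ev_black, 3), round(ev_six, 3))  # both come to 36/38, about 0.947
```

<br /><br />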
Interestingly enough, in America, there <i>is</i> actually one bet which is worse--the "5 number" bet on 0, 00, 1, 2, and 3, which has a payoff of $7 and a probability of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B5%7D%7B38%7D" align="center" />, for an expected value of <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B35%7D%7B38%7D=%5C%240.921" align="center" />--but since they don't use a 00 in other places, we Americans have that unique opportunity of actually making a worse decision when playing roulette than just playing roulette in the first place.<br /><br />The reason the expected value matters so much has to do with something I'm sure I'll be talking a lot about in future entries, The Law of Large Numbers, sometimes mistakenly called the "Law of Averages." Essentially (and pay attention to the words I emphasize for clues about how people misuse it), the LLN says that if you keep making the <i>same bet</i> over and over again, in the <i>long run</i>, the total payoff <i>divided by the total number of bets</i> will converge to the expected value of the bet. So, both your bet on black and your bet on 6 will pay you off $0.947 per bet on average, assuming you have enough chips to hang around and keep betting. Notice that this is a <i>bad thing</i> for you and a good thing for the house, because the expected value is less than the price to play the game, $1. 
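<br /><br />The LLN is easy to watch in action with a quick simulation. A Python sketch (the number of bets and the seed are arbitrary choices of mine):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
n_bets = 200_000

# bet $1 on black each time: $2 back with probability 18/38, $0 otherwise
total_payoff = sum(2 if random.random() < 18 / 38 else 0 for _ in range(n_bets))
print(f"average payoff per $1 bet: ${total_payoff / n_bets:.3f}")  # near 36/38
```

<br /><br />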
I remember being surprised when I went to Las Vegas to see so many casinos advertising things like "99% payoffs guaranteed"--which means they're guaranteeing that you lose money.<br /><br />Also, it's worth mentioning that you can't improve the situation by sneakily combining multiple bets or betting different amounts or any of the other so-called "systems". Expected value has the property that you can compute the expected values of each part of an overlapping bet, like 6 <i>and </i>black, separately and then just add them together. In the end, a $1 bet gives you back an average of $0.947 until you simply run out of money and go home.<br /><br />The difference, then, between the two kinds of bets has to do with something called their <i>variance</i>, or its square root, which goes by the name <i>standard deviation</i>. I'll spare you the formulas (for now), but essentially variance is a measurement of how "spread out" the payoffs of a bet are. So, among all the bets with the same expected value, the bet with the lowest possible variance, which is 0, is the bet where you just hand over your money. The variance gets higher as the payoff gets higher (and the probability of winning gets lower, in order to keep the expected value the same). In the examples above, the bet on black has a variance of about 1.0 and the bet on 6 has a variance of about 33.2, which is substantially larger.<br /><br />What's going on kind of behind-the-curtain is that over time, if you keep placing the same bet over and over, the <i>distribution</i> of your accumulated winnings takes the shape of a bell curve (by the <a href="http://en.wikipedia.org/wiki/Central_limit_theorem" target="_blank">Central Limit Theorem</a>). The mean of that curve is determined by the expected value of each bet (on account of the LLN); how spread out it is depends on the variance. And if you don't want a lot of risk (of losing <i>or winning</i>), you should try to reduce that spread as much as possible. 
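<br /><br />For the curious, the formula I'm sparing you fits in a few lines: variance is the expected squared payoff minus the square of the expected payoff. A Python sketch for a simple win-or-nothing bet:

```python
def bet_variance(prob_win, payoff):
    # variance = E[payoff^2] - (E[payoff])^2; losing pays $0
    ev = prob_win * payoff
    return prob_win * payoff ** 2 - ev ** 2

print(round(bet_variance(18 / 38, 2), 1))  # bet on black: 1.0
print(round(bet_variance(1 / 38, 36), 1))  # bet on 6: 33.2
```

<br /><br />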
The casino, for example, would prefer that you just hand over the 5.26 cents you were going to lose on average and repeat. Since that wouldn't be much fun for you, they offer a little variance to keep you entertained. But as you rightly point out, too much variance isn't fun either, because you don't get to "win" very often, and so you might not come back to play again. So it's a delicate balance. Personally, I like the strategy of betting the table minimum on black and red simultaneously and drinking as many free cocktails as possible while my chip stack gradually diminishes. But these are personal decisions.<br /><br />Bottom line: you can't beat the house, all you can do is maybe make it take a little longer for them to beat you.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com4tag:blogger.com,1999:blog-1920088135580776574.post-26458586220958141352009-02-11T15:34:00.000-08:002009-02-18T23:40:31.541-08:00"in a hole in the grou31;aadn,m vnatoh424..."<i>Dear Dr. Math,<br />Suppose you have a computer randomly generating all the characters making up the text of </i>The Hobbit<i>. Suppose the computer generates 200 characters per second. Is there a time period after which it becomes probable that the computer has produced the text of </i>The Hobbit<i>? My gut tells me, "No, it would never happen."<br />Consider this. If a sufficiently powerful being wanted to write a 20,000 page history of tea-drinking, it could either<br />(a) produce the book<br />(b) produce all possible 20,000 page long sequences of keyboard characters<br />TolkienFan<br /><br /></i>Dear TolkienFan,<br /><br />Your question is related to the famous <a href="http://en.wikipedia.org/wiki/Infinite_monkey_theorem">Infinite Monkey Theorem</a>, about a room full of monkeys randomly banging on typewriters for all eternity eventually producing the complete works of Shakespeare. 
(Digression #1: an English major I knew once joked that the monkeys could reproduce the complete works of D.H. Lawrence in a surprisingly short time.) It turns out that the answer is "yes" in some abstract sense, as long as we're careful about what we mean, but the amount of time it would take is far beyond our comprehension--many, many orders of magnitude more than the estimated age of the universe. Here's a quick back-of-the-envelope calculation:<br /><br />1.) Let's only consider the characters on a standard typewriter. Let's say there are 50 possible characters (including numbers and punctuation, but ignoring capitalization, say), although it turns out not to matter very much whether there are 50 keys or 50,000.<br /><br />2.) Approximate the length of <i>The Hobbit</i> as 360,000 characters--it's a 320 page book, times a standard 250 words per page, times an average 4.5 letters per word in English.<br /><br />3.) Assume all characters are equally likely to be output by the random generator, be it computer or monkey. (Digression #2: someone actually did this experiment once with real live monkeys and found that the monkeys were extremely fond of the letter "s" for some reason. Also, they [the monkeys] really enjoyed urinating on the keyboard.) Assume the characters are probabilistically independent of each other, i.e., knowing what one character is doesn't inform our knowledge of any other one.<br /><br />4.) Now, break up the output of the generator into chunks of length 360,000. For each chunk, the chance of the first character being equal to the first character of <i>The Hobbit</i>, which is "i", is <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program."
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B50%7D" align="center" />. By the assumption of independence, the chance of the first two characters being correct, "in", is <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B50%7D%20*%20%5Cfrac%7B1%7D%7B50%7D" align="center" />, or <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B50%5E2%7D" align="center" />, and so on. So the chance of the whole block of 360,000 characters reproducing the entire book is <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B50%5E%7B360,000%7D%7D" align="center" />, which is about <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B-611629%7D" />, an astronomically small number. Let's call that number <i>p</i>. 
The probability of failing to produce <span style="font-style: italic;">The Hobbit</span> in each chunk is (1-<i>p</i>).<br /><br />5.) If we imagine doing this process a second time, the probability that we'd fail both times is <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%281-p%29%5E2" />, because of the independence property again. In general if we repeat it <i>n</i> times, the probability of failing all <i>n</i> times is <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cinline%20%281-p%29%5En" />. Now, and this is the key, (1-<i>p</i>) is <i>extremely</i> close to 1 but isn't actually <i>equal</i> to 1. So if we take <i>n</i> large enough, the probability of failure will eventually converge to 0; meaning that <i>with high probability</i> we will have at some point succeeded in producing <i>The Hobbit</i>. It's not clear what's meant by an event being "probable," but if, say, we wanted there to be a 95% probability of success, meaning a 5% chance of failure, we would need <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%281-p%29%5En=1-.95=.05" />. 
So to solve for <i>n</i>, we can take logarithms of both sides and divide by <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?log%281-p%29" />, to get that <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?n=%5Cfrac%7Blog%28.05%29%7D%7Blog%281-p%29%7D" align="center" />, which is on the order of <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7Bp%7D" align="center" />, or about <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B611629%7D" /> blocks of characters.<br /><br />6.) 
To give you a sense of how incredibly large this number is, if we generated 200 characters per second, as you say, then 360,000 characters would take 30 minutes, so we could produce about 48 new blocks every day, or about 17,500 per year, on the order of <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B4%7D" />. This would mean that our <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B611629%7D" /> blocks would take about <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B611625%7D" /> years. For comparison, the universe is estimated to be around 13 billion years old, approximately <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program."
src="http://latex.codecogs.com/gif.latex?10%5E%7B10%7D" />, so you would need <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?10%5E%7B611615%7D" /> ages of the universe to have completed the task with 95% probability. Even if you had every atom in the universe working in parallel for the lifetime of the universe, you'd barely make a dent. Surely all our protons would have decayed into nothingness long before you even got to the part in the book where the trolls get turned to stone.<br /><br />I suppose the moral of all of this, if you believe in that sort of thing, is that sometimes the mathematical consequences of our assumptions can far outstrip our intuition, especially when it comes to events that are exceedingly rare or numbers that are exceedingly large. So in a sense your "gut feeling" is right as far as any practical considerations go. Jorge Luis Borges wrote a story about this called <i>The Library of Babel</i>, which deals exactly with your scenario of a powerful being producing every possible book of a certain length. It's true that among such a vast library, there would be a comprehensive volume of the history of tea-drinking. However, there would also by necessity be an incredible number of <i>false</i> histories, every possible one in fact, as well as fraudulent cures for cancer and incorrect predictions for the next 100 World Series winners, and, importantly, there would be <i>no way</i> to distinguish the fake from the genuine. 
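As a footnote, the back-of-the-envelope arithmetic in steps 1 through 6 can be checked by working entirely in logarithms, since the numbers involved are far too large to compute directly (a sketch in Python, not part of the original post):

```python
import math

# Steps 1-2: 50 equally likely characters, 360,000 of them per block.
n_chars, length = 50, 360_000

# Step 4: log10 of p = (1/50)^360000, the chance one block is the book.
log10_p = -length * math.log10(n_chars)   # about -611629

# Step 5: we need (1-p)^n = 0.05.  For tiny p, log(1-p) is roughly -p,
# so n is roughly -ln(0.05)/p, i.e. about 3/p.  Work with log10(n):
log10_n = math.log10(-math.log(0.05)) - log10_p

# Step 6: 200 chars/sec means one block every 30 minutes,
# i.e. 48 blocks per day, about 17,500 per year.
blocks_per_year = 48 * 365
log10_years = log10_n - math.log10(blocks_per_year)

print(round(-log10_p))     # 611629 -- p is about 10^-611629
print(round(log10_years))  # 611625 -- about 10^611625 years
```

The only trick is never forming p itself; everything is done on exponents, which is exactly why logarithms were invented.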
So it's back to option (a) I'm afraid--just write the book.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com6tag:blogger.com,1999:blog-1920088135580776574.post-7218461909967995252009-02-09T22:56:00.000-08:002009-02-18T23:46:30.927-08:00In the Big Apple, I prefer Honeycrisp.I've been called out by Short Round over at <a href="http://www.alt85.com/">alt85</a> again, concerning <a href="http://www.nytimes.com/2009/02/08/fashion/08halfmill.html?_r=1">a recent article</a> in <i>The New York Times</i>:<br /><br /><i>The article included one piece of information with direct relevance to the little people: "a new study from the Center for an Urban Future, a nonprofit research group in Manhattan, estimates that it takes $123,322 to enjoy the same middle-class life as someone earning $50,000 in Houston." [Tugs nervously at collar.] And since the average median* per-capita income in Houston in 1999 (according to houstontx.gov) was $20,101, and since the Urban Future people's figures would suggest that $20,101 in Houston is worth less than $49,578 in New York (for reasons that the newly returned Dr. Math could surely explain better than I,** unless he disagrees, in which case I challenge him to a duel)... Well, New York is f**kin' expensive. Not news.<br />Short Round</i><br /><br />Sir, I accept!<br /><br />So, I'm not generally opposed to the conclusion that New York ¢ity is an expensive place to live. (God knows I could use an extra $500K a year to spend on all those things that I've heard the city is supposedly famous for but that I'm too poor to experience.) The authors of this article seem to be basically assuming that conclusion from the beginning. In a sense, all this "news" piece even claims to do is put some quantitative weight behind a stereotype that we've all pretty much agreed on already. But since it involves numbers, I can't resist picking apart their methodology a little.
The Devil, as always, is in the details:<br /><br />First off, I had to do some considerable digging to even get to the original source of this email-forward-ready statement that $50,000 Houston dollars is equivalent to $123,322 Dollars New York ($NY). The <a href="http://www.nytimes.com/2009/02/08/fashion/08halfmill.html?_r=1">Times article</a> cites a <a href="http://www.nycfuture.org/content/articles/article_view.cfm?article_id=1233&article_type=0">report</a> from the oddly-named Center for an Urban Future, which used a <a href="http://cgi.money.cnn.com/tools/costofliving/costofliving.html">cost-of-living calculator</a> from the CNN (yes, CNN) website, which had as <i>its</i> source material a survey done by the <a href="http://www.coli.org/compare.asp">Council for Community and Economic Research (C2ER)</a>, in which they hired surveyors to sample prices from various cities they wanted to compare (more on that later). The (Center for an Urban Future) report is a 52 page document entitled "Reviving the City of Aspiration" about ongoing trends in the middle class of America, particularly in New York. One problem right off the bat is that the authors never precisely define what they mean by "middle class". They write, "In this study, we use ['middle class'] to indicate those who own homes or who have the prospect of becoming homeowners, earn at least in the middle quintile of wages and enjoy a modicum of economic stability." They then go on to wax poetic for a while about the important contributions middle class Americans make to society (including "providing the customer base for a wide mix of businesses across the city," adding to New York's "street life" and, somewhat circularly, owning homes). But setting aside the logical hiccup for a minute, it's still not clear from the definition who exactly qualifies as middle class. 
Rather, it's somewhat clear what the <i>minimum</i> standards are for membership--you have to own a home or have "the prospect" of doing so, earn "at least in the middle quintile of wages," which is sloppily phrased but I'm guessing means you have to earn more than at least 40% of people in the area, and have "a modicum" of economic stability, which they explain as being able to consistently pay your bills--but there seem to be no clear <i>maximum</i> standards. For example, would someone earning $250K per year in the <a href="http://pubdb3.census.gov/macro/032007/hhinc/new06_000.htm">98th percentile</a> be considered middle class, assuming he owned a house and could pay his bills (for monocle cleaning and storage)? Maybe, by the authors' definition, but certainly not by mine.<br /><br />Now, if we trace this comparison-of-cities data all the way back to its source, the C2ER survey, we find an interesting disparity. The basic idea of the survey was to follow some sample of people around and make a log of the prices of all the things they paid for--clothes, food, entertainment, travel, etc.--to get a measurement of the relative cost of living in different places.
However, in <a href="http://www.coli.org/surveyforms/colimanual.pdf">the guidelines</a> for the survey participants, it says specifically that the authors are not looking for middle class consumers to follow around (they changed their original survey language because "it was too easily confused with 'middle class,' which isn't the same thing at all"); rather, they focus on a population they call "moderately affluent professional and managerial households", who are characterized as "a household consisting of both spouses and one child (for pricing apartments, it is assumed that the couple is childless or the individual is single)" with the criteria that "both spouses hold college degrees; at least one has an established professional or managerial career," and, most significantly, "household income is in the <i>top quintile for the area</i>" (emphasis mine). For most cities, they say that the household annual income should be "between $70,000 and $100,000;" however, as they say, "the appropriate income range will be higher in traditionally high-cost places like New York..." So our monocle-polishing Uncle Moneybags the hedge fund manager would be included in the survey.<br /><br />What's the real problem with this? Apart from the fact that we've gotten, <i>explicitly</i>, pretty far away even from the ill-defined "middle class" of the Urban Future report, upon whose homeowning backs the street life of the city rests, we've also gotten into some shaky statistical territory, where I believe we're not even comparing apples to apples anymore, but rather something like apples to <i>different kinds</i> of apples (Fuji to Jonagold), to learn all about oranges. And also the middle class. 
I don't have any hard data to back me up here, but my sense from having lived in New York for a little while now is that, due to the presence of so many ultra rich celebrities and financiers, the shape of the distribution of incomes here is more heavily slanted towards the top ("fat-tailed," as they say), meaning not only is the average income higher, but the <i>relative</i> difference between the top 20% and those of us way down in the middle is considerably greater than in other U.S. cities. In pictures, the graph of incomes in New York is more like this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_zy58RlQeD60/SZEt81eZ0dI/AAAAAAAAADg/AHG3YtdfbNI/s1600-h/chart104.gif"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 477px; height: 224px;" src="http://4.bp.blogspot.com/_zy58RlQeD60/SZEt81eZ0dI/AAAAAAAAADg/AHG3YtdfbNI/s320/chart104.gif" alt="" id="BLOGGER_PHOTO_ID_5301068759561785810" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />than this:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.mathnstuff.com/math/spoken/here/2class/90/statpb.gif"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 266px; height: 289px;" src="http://www.mathnstuff.com/math/spoken/here/2class/90/statpb.gif" alt="" border="0" /></a><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br />In the latter case (Houston), it doesn't take much more income to put you in the top 20%, but in New York, it takes considerably more. So, the potential gap in luxury lifestyles is exaggerated, and as a result, more especially luxurious opportunities open up for those who can afford them. 
There really just isn't a Houston equivalent of buying a $150 truffle and <i>foie gras</i> burger at Bistro Moderne or paying $75K per year for a personal driver or all the other outrageous things the <i>Times</i> article mentions.<br /><br />Which all brings me all the way around to the point: that measuring what it costs to uphold a "standard of living" is an extremely difficult and subtle problem, one which requires a great deal of precision and care. And it may not really be possible when the markers of that standard vary so greatly from place to place. New York is a pretty special town with no real equivalent anywhere else in the U.S., and in fact, based on the ways we live our lives, renting instead of owning, riding the subway instead of driving, eating fancy burgers made out of goose liver... it may not even make sense to think of it as part of the U.S., despite its importance as a cultural hub.<br /><br />Like the old song goes, "New York, New York, it's a pretty special town with no real equivalent anywhere else in the U.S., and in fact, based on the ways we live our lives, renting instead of owning, riding the subway instead of driving, eating fancy burgers made out of goose liver... it may not even make sense to think of it as part of the U.S., despite its importance as a cultural hub."<br /><br />-DrM<br /><br /><br />P.S.--To Short Round: oddly enough, it seems that "average median" is correct there. In the report, they averaged together the median incomes of the various ethnic groups in the suburbs of Houston, presumably with some weighting. Hence, average median income. Weird.drmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com3tag:blogger.com,1999:blog-1920088135580776574.post-37883730364819856642009-02-07T21:27:00.000-08:002009-02-18T00:05:10.006-08:00The Scales of Justice<i>Dear Dr. Math,<br />I don't mean to be crass, but what are the chances that Ruth Bader Ginsburg is going to die during Barack Obama's presidency? 
I read that the average lifespan of an American is 78 years, and she's only 75 but now she has cancer. Also, some of the other judges are old, too. How many appointments is he probably going to have to make in the next 4 years?<br />Sincerely,<br />ScotusLover728</i><br /><br />Dear ScotusLover,<br /><br />With the recent news about Justice Ginsburg being diagnosed with pancreatic cancer, this is a topic on the minds of a lot of people. Of course, we're all hoping for the best for her, but the issue of Supreme Court appointments has ramifications far beyond our wishes for her health. When one vote can make the difference in <a href="http://en.wikipedia.org/wiki/Bowers_v._Hardwick">who can stick what in whose what</a> or <a href="http://en.wikipedia.org/wiki/Ledbetter_v._Goodyear_Tire_%26_Rubber_Co.">whether little old ladies deserve equal pay for equal work</a> or even <a href="http://en.wikipedia.org/wiki/Bush_vs_Gore">who the president was for the last 8 years</a>, the news that one of the justices has a potentially deadly illness gets everyone's attention.<br /><br />First off, we should dispense with the whole "average lifespan" argument, which is largely irrelevant here. The statistics you've probably seen quoted are the expected (in the sense of average, or mean) lifespan of someone born today. It includes the effect of a fair number of people dying young. As someone gets older, his/her expected lifespan increases, because we have to incorporate into the calculation the fact that he/she is still alive. The relevant numbers for these things can be found in what are called <a href="http://en.wikipedia.org/wiki/Life_table">life tables</a>, which are tools that actuaries use to figure out what your grandmother's life insurance premiums should be, etc. 
However, these are still just averages over large swaths of the population and they don't take into consideration any more particular information we might have about someone.<br /><br />So, in Justice Ginsburg's case, the more relevant number is the <a href="http://en.wikipedia.org/wiki/Mortality_rate">mortality rate</a> for her particular form of pancreatic cancer, given the stage at which it was diagnosed. And unfortunately, the numbers are not particularly good. One number that the news media seems to have latched onto is that only 5% of people diagnosed with pancreatic cancer survive for 5 years after the diagnosis. Again, <a href="http://doctormath.blogspot.com/2008/10/we-all-use-math-every-day-or-do-we-yes.html">I'm not a doctor</a>, but from Googling around I've discovered that a big part of the reason pancreatic cancer is so deadly is that it frequently goes undetected until it's already in a fairly advanced stage. So, the fact that her cancer was caught relatively early should work in her favor. I think the right number to be considering here is the survival rate for pancreatic cancer in its earliest stage, which is <a href="http://health.usnews.com/articles/health/cancer/2009/02/05/6-things-you-need-to-know-about-pancreatic-cancer.html?s_cid=related-links:TOP">something like 35% after 5 years</a>.<br /><br />It's important to remember, also, that these statistics just reflect the potential of dying from the disease (assuming they're measuring <a href="http://www.cancer.org/docroot/CRI/content/CRI_2_2_4X_How_Is_Pancreatic_Cancer_Treated_34.asp?sitearea=">relative survival rates</a>). That is, the number 35% is supposed to represent the percentage of people diagnosed with cancer who are still alive after 5 years given that they <i>would have been alive anyway</i>.
Of course, it's a difficult thing to measure, but it means we should include the fact that Ruth is 75 years old and that just being 75 <i>itself</i> has a 5 year survival rate of <a href="http://www.ssa.gov/OACT/STATS/table4c6.html">83%</a> (for American females). To determine that overall survival rate, I took the number of women alive at age 80 and divided by the number alive at age 75.<br /><br />In the final tally, then, my best guess for Justice Ginsburg's chances of living out the next 5 years would be the chance that any 75-year-old woman would live 5 more years times the relative survival rate for someone with an early diagnosis of pancreatic cancer, that is, (.83)*(.35) = .29, or 29%. There's no accounting for determination, the will to live, or hatred of Antonin Scalia, however. The lesson to draw from all of this is that the more information you have about a particular person, the more precisely you can fine-tune the analysis of his/her situation, but also the less data you have to draw conclusions from. Really, what we'd like to know is the 5 year survival rate for being Ruth Bader Ginsburg, but there's only been 1 known case in history.<br /><br />To answer your other question about likely Supreme Court appointments, I could go through each remaining justice and compile mortality tables for each based on his particular lifestyle and risk factors, but maybe I should leave that as an exercise for the reader. Personally, I find all this kind of creepy. I will say that Justice Stevens is 88, and the 5 year survival rate for an 88-year-old American male is <a href="http://www.ssa.gov/OACT/STATS/table4c6.html">36%</a>, or about the same as Justice Ginsburg's cancer diagnosis. Of course, there can be many reasons for a justice to leave the court besides death, as well. Of the 101 Supreme Court justices who have left the court, only 50 have done so by dying. 
The remaining 51 resigned or retired, presumably before they died, unless they were pulling a <a href="http://en.wikipedia.org/wiki/Jeremy_Bentham#Auto-icon">Jeremy Bentham</a>. It appears that having an extremely silly name is not a <a href="http://en.wikipedia.org/wiki/Lucius_Quintus_Cincinnatus_Lamar_%28II%29">risk</a> <a href="http://en.wikipedia.org/wiki/Felix_Frankfurter">factor</a>.<br /><br />A rough estimate for the average rate of appointments can be gotten by dividing the total number of appointments, 110, by the age of the Supreme Court, which is 220 years young (happy birthday, Supreme Court!). So that's a rate of about half a justice per year, meaning Barack Obama will have to appoint an average of 2 justices for each term he serves as president.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com0tag:blogger.com,1999:blog-1920088135580776574.post-82532824902442899992008-10-07T23:01:00.000-07:002009-02-17T23:59:18.392-08:00Rods to the Hogshead<span style="font-style: italic;"><br />Dear Doctor Math,<br />Should I buy a Prius or a Honda Civic? At the Toyota dealership they told me the Prius pays for itself in gas savings, but I don't trust them.<br />Thanks,<br />Deke</span><br /><br />Well, Deke, it's good to be skeptical. But let's see if we can crunch the numbers and settle this for ourselves without having to trust a car salesman to figure it out for us.<br /><br />First, we'll have to make some assumptions about the costs of the various things in question and the ways that you're planning to use your car, whichever one you get. All of the numbers I'm about to quote came from the <a href="http://fueleconomy.gov/">EPA's fuel economy website</a>. Now, I don't know anything about you, but I'll go ahead and assume you drive about 15,000 miles per year, like the average American does. (Someday I'll write about the difference between "average" and "typical," but we'll table that discussion for now.) 
Of that 15,000 miles, I'll assume that approximately 55% is "city" driving and the other 45% is "highway," again in keeping with the average. So that works out to 8,250 miles in the city and 6,750 miles on the highway. If you're involved in a lot of cannonball runs, you can adjust accordingly.<br /><br />According to the EPA's latest numbers, the 2009 Honda Civic gets 25 miles per gallon in the city, 36 highway. So, every year you would use 8,250/25, or 330, gallons of gas in city driving and 6,750/36, or 187.5, gallons on the highway. Your total volume of gas used per year in the Civic would be 330 + 187.5, or 517.5 gallons.<br /><br />Due to its greater efficiency in stop-and-go traffic, the Toyota Prius gets 48 miles per gallon in the city and 45 on the highway. Therefore, the total amount it guzzles per year is 8,250/48 + 6,750/45, or about 321.9 gallons.<br /><br />Now, gas prices are hard to predict, but let's guess that over the lifespan of your car, gas will cost an average of $4.00 per gallon (in 2008 dollars). That seems like a reasonable projection given the way prices have historically risen. So that works out to 517.5*4, or $2,070, per year for the Civic and 321.9*4, or about $1,287, for the Prius. Every year, that means you save about $783 by driving the Prius.<br /><br />According to the manufacturers, the suggested retail price for the Civic is $16,205; for the Prius it's $22,000. These prices assume a basic package; probably, any extras you might want, like ground effects or those things that make it jump up and down, would cost about as much for either car. The difference in price, therefore, is $5,795, which would take 5,795/783, or about 7.4, years to pay off in gas savings. Of course, if gas goes up even more, say to $5 per gallon, that number would come down to as little as 6 years.<br /><br />Either way, it seems like a fairly long time, but not outside the realm of possibility.
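The whole comparison can be bundled into a short sketch (Python, not part of the original post) using the EPA figures quoted above:

```python
# Payback time of the Prius's price premium in gas savings
# (a sketch using the mpg figures and prices quoted above).

def annual_gallons(city_miles, hwy_miles, city_mpg, hwy_mpg):
    """Gallons burned per year for a given city/highway split."""
    return city_miles / city_mpg + hwy_miles / hwy_mpg

city, hwy = 8_250, 6_750                     # 55/45 split of 15,000 miles
civic = annual_gallons(city, hwy, 25, 36)    # 517.5 gallons per year
prius = annual_gallons(city, hwy, 48, 45)    # about 321.9 gallons per year

price_per_gallon = 4.00
annual_savings = (civic - prius) * price_per_gallon   # about $783
price_gap = 22_000 - 16_205                           # $5,795 sticker difference

print(round(price_gap / annual_savings, 1))  # about 7.4 years to break even
```

Rerunning it with `price_per_gallon = 5.00` brings the payback period down to about 6 years, matching the figure above.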
I couldn't find any good numbers here, but people I know who own cars seem to get a new one about every 5 years. Maybe you hold on to your cars a little longer, Deke, or maybe there might be other things about driving a Prius that appeal to you, I don't know. But strictly in terms of the gas savings, it doesn't seem to quite be worth it, although it's close. The market seems to have done a pretty good job sorting out these relative prices.<br /><br />An interesting side note here is that the <span style="font-style: italic;">marginal</span> gas savings (that is, the money saved for every additional mpg) go down as the cars get more efficient. For example, doing the same calculation as before, we can see that an SUV that gets 10 miles per gallon costs $1,000 more per year in gas than one that gets 12 miles per gallon. So, the more important choices may not be between pretty good and very good, but between bad and very-slightly-less-bad.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com5tag:blogger.com,1999:blog-1920088135580776574.post-14707405313941227212008-10-05T13:55:00.000-07:002009-02-18T00:05:44.506-08:00Poll Position<i>Dear Doctor Math,<br />I saw a poll today that said Obama is up 7 percentage points in Ohio with a margin of error of 4%. So does that mean he could actually be losing there? Also, how can they come up with these numbers just by asking a few hundred people?<br />A Concerned Citizen<br /><br /></i>Those are good questions, ACC, and they're related. To answer them, I should talk a little about how polling works and what the various numbers mean. First off, polling is an imperfect attempt at predicting the future. No one knows for sure what's going to happen on election day, and sometimes (see Florida in 2000) it's hard to figure out what <i>did</i> happen even after the election. 
But polls are our best guess, and usually they do a pretty good job.<br /><br />To conduct a poll, a news agency like CBS or Reuters or a public opinion firm like Rasmussen gets a staff of questioners to each call a handful of people and ask them their opinion on things, like how they're planning to vote. Since it takes time and money to make the calls, the pollsters typically limit themselves to something like a few thousand people. Of course, a lot of people (like young people who don't have landlines and probably vote Democratic, but never mind...) don't answer, so by the time the pollsters compile all their data together, they've got maybe 1000 quality responses to go on. From here they try to figure out what the remaining millions of people in the state or country are thinking, and then they report that information, thereby influencing the way people think, but that's another story.<br /><br />So, the first question is, how do they know they didn't just ask all the wrong people? And the answer is they don't know for sure, of course, but if their methods are sound they can say with a reasonable degree of certainty that their polling numbers reflect the larger population. Think of Mario Batali tasting a single spoonful out of a pot of marinara sauce to see if it needs more oregano. Of course, it's possible he just got the most oreganoed spoonful in the whole pot, but if he's done a good job of stirring it up beforehand, he can be reasonably sure that his sample was <i>representative</i> of the distribution of the whole. But it would still be embarrassing for him to be wrong, so it would be nice to at least have some idea how <i>much</i> of a risk he was taking or maybe if he should taste it again.<br /><br />That's where the "margin of error" comes in. The error that pollsters give is an indication of how sure they are that the sample they chose is a reasonable reflection of the population at large. 
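The spoonful-of-sauce idea is easy to make concrete with a tiny simulation. Here's a sketch with made-up numbers: a large population whose "true" support for a candidate is 52%, polled 1,000 people at a time. (The 52% figure and the 200-poll repetition are illustrative assumptions, not anything from a real poll.)

```python
import random

random.seed(1)
TRUE_SUPPORT = 0.52   # hypothetical "true" fraction of supporters in the population
N = 1000              # respondents per poll

def run_poll():
    """Ask N randomly chosen voters; return the fraction who say yes."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

# Run the same poll 200 times and see how far the estimates stray from the truth.
estimates = [run_poll() for _ in range(200)]
print(min(estimates), max(estimates))
# The estimates cluster within a few percentage points of 0.52 -- the spoonful
# really does taste like the pot, and the margin of error quantifies how closely.
```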
For reasons I hope to get into someday (involving <a href="http://en.wikipedia.org/wiki/Central_limit_theorem">The Central Limit Theorem</a>), the pollsters assume that the "true" value of the thing they're estimating follows a bell-shaped curve centered around their estimate. So, if they're trying to figure out how many people in Ohio are going to vote for Obama, they take the results from their poll (49% in the latest <i>Columbus Dispatch </i>poll) and say that the actual percentage of people planning to vote for Obama has a <i>probability distribution</i> forming a bell-curve centered around 49%. That means they can actually quantify the probability that their estimate is off by any given amount. The <i>margin of error</i> is the amount of deviation it takes before the pollsters can say with 95% probability that the true value is within that much of the estimate. They pick 95% mostly out of convention and the fact that it's easy to compute. Here, the key factor is the number of respondents--a rough formula for the margin of error (at the 95% level) for a sample of N people is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B0.98%7D%7B%5Csqrt%7BN%7D%7D" align="center" />, which for 1000 people comes out to be about 0.03, or 3%. They might occasionally bump it up to 4% just to be extra sure.<br /><br />Now, to answer your question, does this mean that McCain could actually be ahead? After all, the 7 point difference is less than twice the margin of error, so if we add that much to McCain and take it away from Obama, it does put McCain on top. It's possible, as I mentioned above, that the pollsters just asked enough of the wrong people to skew the numbers. 
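The rough margin-of-error formula quoted above is simple enough to check for a few sample sizes. (The 0.98 in the numerator is 1.96, the 95% z-value, times 0.5, the worst-case standard deviation of a yes/no answer.)

```python
import math

def margin_of_error(n):
    """Rough 95% margin of error for a proportion estimated from n respondents.

    0.98 = 1.96 (the 95% z-value) * 0.5 (worst-case std. dev. of a yes/no question).
    """
    return 0.98 / math.sqrt(n)

for n in (100, 400, 1000, 2500):
    print(n, round(margin_of_error(n), 3))
# Note that quadrupling the sample only halves the margin, which is one reason
# pollsters rarely bother going far beyond a thousand respondents.
```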
In fact, if you've been paying close attention to what I said about the true value having a bell-curve distribution, you might have noticed that actually the situation is in some sense <i>worse</i> than just that. That 4% number is just the cutoff for the 95% confidence interval; it could be (with about 5% probability) that the poll is off by even <i>more</i> than 4%. Should we just throw up our hands and quit?<br /><br />The important thing to remember here is that the margin of error isn't the end of the story. The bell-shaped curve which gave us the error calculation also shows us that it's more likely than not that our estimate is close to the truth. So again, we can quantify the degree of certainty that we have in estimating the difference between Obama's percentage and McCain's. Using a formula (that I admit I had to look up), we can compute that the <i>standard error</i>, a measure of how spread out the distribution is, in that estimation of the 7% Obama-McCain gap in Ohio is about 0.03. That means the bell-curve is pretty narrowly distributed around the guess. According to the bell-curve distribution, this gives a probability of about 99% that Obama is "truly" ahead in Ohio.<br /><br />So, yes, even if those polling numbers are correct, it is possible that McCain's ahead, but I wouldn't bet on it (unless someone was giving me greater than 100-to-1 odds).<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com0tag:blogger.com,1999:blog-1920088135580776574.post-92110096073161493842008-10-04T14:55:00.000-07:002009-02-18T00:05:25.239-08:00Fallacy of the Week: The Base Rate Fallacy<span style="font-style: italic;">Dear Doctor Math,<br />If a doctor tells you that you tested positive for something and that the test is 99% accurate, does that mean you have a 99% chance of having the disease? 
Just curious.<br />MTG<span style="font-style: italic;"><span style="font-style: italic;"><br /></span></span></span><br />I should probably begin by reiterating that I'm not actually a doctor, at least not <span style="font-style: italic;">that </span>kind (see earlier post), so please don't take what I'm about to say as medical advice. But basically the answer is no, or at least you can't tell without further information. You see, what your doctor may have omitted telling you is the <span style="font-style: italic;">base rate</span> for the disease in question, that is, the general probability of having the disease without the extra information that you tested positive for it. If that base rate is really low, even a very accurate test isn't a strong indicator of having the disease. Let me illustrate with some numbers:<br /><br />Let's say that, on average, 1 out of every 1 million people suffers from <a href="http://en.wikipedia.org/wiki/Psychogenic_dwarfism">psychogenic dwarfism</a>. So, for this condition your base rate is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B1,000,000%7D" align="center" />, or .0001%. Now, you go in for a physical and the doctor says that you tested positive for this debilitating condition, and that the test is 99% accurate. There's some room for interpretation as to what that means exactly, but let's take it to mean that (1) if you actually have the condition, you'll definitely test positive, and (2) if you don't have the condition, there's a 99% chance you'll test negative. So, what are your odds? Well, imagine that 100 million people go in for tests. We know that about 100 of them will actually be psychogenic dwarves. 
Of the remaining 99,999,900 people, however, about 1% will test positive even though they don't have it. So that means 999,999 <i>false positives</i> compared to only 100 true positives. The total number of people testing positive is 100 + 999,999, or 1,000,099, and only 100 of them actually have the disease. Since all you know is that you tested positive, you don't know which of these people you are, so your chance of actually being positive is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B100%7D%7B1,000,099%7D" align="center" />, about 1 in 10,000, or .01%. Bottom line: it's still <i>very</i> unlikely that you have the condition, even though you tested positive for it and the test is very accurate, so don't go out and sell all your normal-sized clothes just yet.<br /><br />The basic rule here is always to consider the number of true positives relative to the total number of people who <span style="font-style: italic;">would</span> test positive, true or false. What we've seen in this example, and what frequently turns out to be the case, is that even a test that sounds like a sure thing can end up producing <i>way</i> more false positives than true ones, just because there aren't that many true positives out there to discover. (For another example, ask the Department of Homeland Security about their terrorist-detecting techniques.) I think part of the problem here is that we're dealing with numbers that we don't have much intuitive grasp for. I mean, "one in a million" basically means it won't ever happen, right? But "99 percent accurate" means it <i>must</i> be true. So how do you decide? 
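The false-positive bookkeeping above is really just Bayes' rule in disguise, and it generalizes to any base rate and test accuracy. Here's a minimal sketch, using the same interpretation of "99% accurate" as in the text:

```python
def prob_disease_given_positive(base_rate, sensitivity, specificity):
    """P(disease | positive test), by weighing true positives against false ones.

    sensitivity: P(positive | disease); specificity: P(negative | no disease).
    """
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The psychogenic-dwarfism numbers from the text: a 1-in-a-million base rate,
# a test that always catches a true case, and 99% accuracy on healthy people.
p = prob_disease_given_positive(1e-6, 1.0, 0.99)
print(p)  # about 0.0001, i.e. roughly 1 in 10,000 -- matching the head count above
```

Try bumping the base rate to something common, like 0.1, and the same test suddenly becomes quite convincing, which is the whole point of the fallacy.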
The nice thing about math is that it can't get bullied around by intimidating-sounding numbers like these; it just puts them in their relative place. Remember, probability is ultimately all about information, and it should take a <i>lot</i> of evidence to convince us of something extremely unlikely.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com0tag:blogger.com,1999:blog-1920088135580776574.post-63006270899323884842008-10-02T21:09:00.000-07:002009-02-18T23:42:54.431-08:00Boys and/or GirlsHere's a classic probability puzzler that's been floating around in the aether recently. The particular question came from my good friend Short Round over at <a href="http://www.alt85.com/">www.alt85.com</a>:<br /><br /><span style="font-style: italic;">Dear Doctor Math,<br />You know that a certain family has two children, and that at least one is a girl. But you can't recall whether both are girls. What is the probability that the family has two girls? I stole this question from the infernet.</span><br /><span style="font-style: italic;">SR<span style="font-style: italic;"><br /><br /></span></span>First let me give the answer, and then I'll talk about why it sounds wrong.<br /><br />The answer is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B3%7D" align="center" />. Let's call the two kids Ashley and Whitney (could be boys' or girls' names, get it?); there are initially four possibilities for the genders of (Ashley, Whitney) without the extra restriction that one has to be a girl. They are: (boy, boy), (boy, girl), (girl, boy), (girl, girl). 
Assuming that each kid is a boy or a girl with equal probability and that the genders of the kids are independent of each other, each pairing has probability <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B4%7D" align="center" /> because we have no information to prefer one over the others. Now, with the added information that at least one is a girl (we saw some moisturizer in the bathroom but we don't know whose it is), we can eliminate the (boy, boy) possibility. However, we still have no information to indicate that any one of the remaining three pairings is more likely than any other, and so the probability of each of them is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B3%7D" align="center" />, given the new information. Thus, the chance of there being two girls is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." 
src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B3%7D" align="center" />.<br /><br />Now, as I mentioned before, this is a classic brainteaser, which really just demonstrates the weird counterintuitive things that can happen when you ask for the probability of an event given some unusual information. Most people balk at the idea of there being a <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B2%7D%7B3%7D" align="center" /> chance of a kid being a boy, because they're so conditioned to think of the odds always being equal, but since probability is an expression of the <span style="font-style: italic;">information</span> you have about some event and the consequences of said information, it's entirely possible to construct bizarre examples like this one (it's hard to contrive a scenario where you would know just that one of the kids is a girl but not know which one). Strangely, if you know the <span style="font-style: italic;">oldest</span> child is a girl, then the probability of the other being a girl goes back up to <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B2%7D" align="center" />.<br /><br />To show what would happen if you took this a few steps further--let's say you had a whole busload full of 10 people with androgynous names and you didn't know the genders of any of them. 
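Before we board the bus, it's worth noting that if you don't trust the counting argument for the 1/3 answer, you can verify it by brute force. Here's a minimal simulation, assuming (as above) that each child is independently a girl with probability 1/2:

```python
import random

random.seed(0)
trials = 100_000
at_least_one_girl = two_girls = 0

for _ in range(trials):
    # True means girl; each of Ashley and Whitney is a girl with probability 1/2.
    ashley, whitney = random.random() < 0.5, random.random() < 0.5
    if ashley or whitney:            # keep only families with at least one girl
        at_least_one_girl += 1
        two_girls += ashley and whitney

print(two_girls / at_least_one_girl)  # hovers around 1/3, not 1/2
```

Scaling the same loop to ten children and conditioning on at least nine girls reproduces the 1/11 figure from the bus example in exactly the same way.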
To help us tell them apart, let's refer to each by his/her seat number, 1 through 10. There are two possibilities for each person, and they're independent of each other, so to start off with, there are <img align="center" id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?2%5E%7B10%7D" />, or 1024, possible configurations for the gender line-up, all equally likely. Now, suppose you find Zac Efron posters in the duffel bags of nine out of the ten people, so you know there are at least nine girls but somehow you don't know who they are. Given that information, you can eliminate all but 11 remaining possibilities: either they're all girls, or there's one boy, and he's sitting in one of 10 possible seats. Again, we have no reason to think one of these outcomes is more likely than any other; thus, the resulting probability that you actually have a bus full of all girls is <img id="equationview" name="equationview" onload="processEquationChange()" title="This is the rendered form of the equation. You can not edit this directly. Right click will give you the option to save the image, and in most browsers you can drag the image onto your desktop or another program." src="http://latex.codecogs.com/gif.latex?%5Cfrac%7B1%7D%7B11%7D" align="center" />.<br /><br />Like puberty, probability can be scary sometimes, but that's just life.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com0tag:blogger.com,1999:blog-1920088135580776574.post-56116230936911473032008-10-02T11:06:00.000-07:002009-02-17T23:55:19.481-08:00We all use math every day. Or do we? 
Yes.Hi and welcome to Ask Doctor Math, the warm cozy corner of the Internet where anyone with a math question can pull up a virtual seat, grab a mug of hot virtual cocoa, and sit by the glowing virtual fire of knowledge as I attempt to answer questions mathematical. I created this blog as a forum for people young and old to clear up some lingering misconceptions, bring fuzzy notions a little more into focus, and, with luck, add a few more tools to the toolbox of ideas they use to make sense of the world. I may not be <i>that</i> kind of doctor, but I'd like you to think of me that way anyway--the friendly old small-town physician you can come to with anything, from questions about birth control or that rash on your back to home remedies for a colicky baby or a toothache; hell, I'll even help birth a foal, if that's what you need. (Note: the preceding was just an extended metaphor.) I hope to be your guide on a journey of mathematical discovery. So please, come on in and make yourself at home. Take off your virtual shoes if you like, or don't, it's up to you.<br /><br />To kick things off, I thought I'd reply to a question that seems to be on a lot of people's minds these days regarding the alleged difference between the "mathematical world" and the so-called "real world."<br /><br /><i>Dear Doctor Math,<br />A guy on TV keeps telling me that "We all use math every day." Is that really true? Give several dozen examples.<br />Sincerely,<br />Dr. Math (you)<br /></i><br />Before even beginning to scratch at the surface of that question, I think we should talk a little about what we mean by "math" exactly. For a lot of people I've talked to, "math" is basically synonymous with "numbers." So, in that sense, yes, we probably do all encounter math every day, telling time, riding on a bus, dialing a phone, etc. If we didn't have numbers, we'd be forced to use little pictures of things, so it's convenient to have a shorthand. 
But a lot of that could only be superficially described as mathematical.<br /><br />Perhaps a more tangible application of math is in <i>quantitative</i> <i>reasoning</i>--the kind of thing we're forced to do a lot in our capitalistic society ("How much is 30% off $80?", "How do I split a $32 dinner bill 3 ways?", "Which is a better value, a 2'x3' rug for $25 or a 4'x6' rug for $60?", "How long will it take me to go 90 miles if I average 55 miles per hour?", "How many fluid ounces are in 2.5 liters?", etc.). The main tools we generally use here are fractions, percents, ratios, and basic arithmetic--addition, subtraction, multiplication (and its evil stepchild division). A lot of the time we just ballpark it, though, and our powers of guesstimation are more or less adequate to get us through a typical day. When it's really important that we get something right, we outsource the job to a computer, cell phone, calculator, or cash register. So in that sense, I'd say we use math, but we could probably all stand to be a little better at it. At any rate, there's not much genuine thinking involved.<br /><br />Probably what the writers of <i>Numb3rs</i> had in mind, though, is the notion (championed by high school math teachers everywhere) that even when we're not explicitly dealing with numbers (or numb3rs), we frequently use our powers of <i>analytical reasoning</i> in a way that could broadly be considered mathematical. Our toolkit here includes such things as deductive logic ("Just because I said that dress doesn't make you look fat doesn't mean you don't.", "Every girl knows someone who likes everyone else more than her."), elementary hypothesis testing ("My doctor said the test is 90% accurate; how concerned should I be?", "I guess there could be an innocent explanation for why that guy would be running down the street carrying a TV at 2am." 
), and management of risk ("What does it mean that my birth control method is 99.9% safe?", "The weather forecast calls for a 30% chance of rain; should I bring an umbrella or not?"). I would also include in this category some basic optimization problems, like "How do I fit all these boxes in the car?", "Is this couch too big to fit through my hallway?", or "Should I park here or look for a better spot?". Again, these questions don't really tap our quantitative skills so much as our rational/logical thinking skills. A lot of it is intuitive, but a lot can be trained by thinking our way through other rational/logical problems. At its core, that's what math is really all about--practice for the kind of problem-solving that's required of us to navigate a sometimes complex and baffling world.<br /><br />And like it or not, we are being subjected to persuasion of an increasingly mathematical bent. News reports regularly tell us what activities or foods might be correlated with cancer or aging. As we move into political crunch-time, we are barraged almost daily by statistical arguments about the state of the nation and who might be to blame for what, as well as the frequent polling results with their "margins of error" and what this means for the so-called "electoral math." The present crisis on Wall Street has, among other things, shown the hazard in trusting all of our quantitative risk-management to a few experts, whom we treat like the high priests who are the only ones allowed into the holy sanctuary. If we're afraid to take responsibility for our role in distinguishing mathematical argument from fallacy, we will only get more and more manipulated by those to whom we yield that authority.<br /><br />In other words, we may not use math every day, but it sure as hell uses us.<br /><br />-DrMdrmathhttp://www.blogger.com/profile/17936175968300765200noreply@blogger.com1