The Law of Large Numbers and the Gambler’s Fallacy


The Law of Large Numbers

edited by John Healy

This theorem is a fundamental result of probability theory. The law states that if one conducts the same experiment a large number of times, the average of the results should be close to the expected value. Furthermore, the more trials conducted, the closer the resulting average will be to the expected value.
 
This is why casinos win in the long term. Even with only a slight edge in the odds of a game, in the long term the results of all the bets and chances will reflect those odds. Sure, there is variation, and sometimes you win, yet on average, over many attempts, the house wins.
 
The experiments or trials come from the same setup or game; the same underlying phenomenon creates the result of each trial. With a fair 6-sided die, on average the side with 6 spots will appear on top as often as any other side. Calculating the average of a large number of die rolls, we will eventually converge to 3.5, which is the expected value of a single roll.
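A quick simulation illustrates this convergence (a sketch using Python's standard `random` module; the sample size and seed are arbitrary choices, not from the original post):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Roll a fair six-sided die many times and compute the average result.
n_rolls = 1_000_000
total = 0
for _ in range(n_rolls):
    total += random.randint(1, 6)  # each face 1..6 is equally likely

average = total / n_rolls
print(average)  # lands close to the expected value of 3.5
```

With a smaller number of rolls the average wanders farther from 3.5; increasing `n_rolls` tightens the result, which is exactly the law at work.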
 
If we use a different unfair die for each roll, then all bets are off and the law of large numbers will not apply.
 

Gambler’s fallacy

 
When observing a series of random events, say the flipping of a fair coin, and noting that the last two tosses resulted in heads, we may expect (incorrectly, due to a misinterpretation of the law of large numbers) that the next flip will result in tails. We know a fair coin has a 50% chance of landing on heads or tails on each flip; therefore, the fallacy goes, any given set of trials should come out evenly split between the two outcomes.
 
The issue is, no one told the coin. Each flip's outcome is the result of the many small variations that determine its landing position. The coin has no memory and keeps no tally of previous flips. It simply flies through the air and lands on one side or the other, based on that one trial alone.
 
A run of, say, 5 tosses all resulting in heads has a probability of (1/2)^5 = 1/32 of occurring. The chance of the next toss being heads or tails remains unchanged: 1/2 heads and 1/2 tails. We may wish for tails given the run of 5 heads, yet that is only our belief, not the probability.
 
If we repeat the experiment of tossing a coin five times many times (many being a large number, say thousands or millions of 5-toss experiments), the result on average would be 2.5 heads per group of five tosses. For any single toss the result is unknown, with an equal chance of heads or tails, whether it is the first toss or the one-millionth. None of the previous trials provides any information about the result of the sixth toss; it remains 50/50.
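Both claims can be checked in one short simulation (a sketch; the number of 5-toss groups and the seed are assumptions for illustration):

```python
import random

random.seed(7)  # fixed seed for a reproducible run

n_groups = 200_000
total_heads = 0
sixth_after_streak = []  # outcome of toss 6 whenever tosses 1-5 were all heads

for _ in range(n_groups):
    tosses = [random.randint(0, 1) for _ in range(6)]  # 1 = heads, 0 = tails
    first_five = sum(tosses[:5])
    total_heads += first_five
    if first_five == 5:  # a run of 5 heads, probability 1/32 per group
        sixth_after_streak.append(tosses[5])

avg_heads = total_heads / n_groups
print(avg_heads)  # close to 2.5 heads per 5-toss group

# Conditional on a run of 5 heads, the sixth toss is still about 50/50.
frac_heads = sum(sixth_after_streak) / len(sixth_after_streak)
print(frac_heads)
```

The second printed value is the point of the fallacy: even restricted to the groups that opened with 5 straight heads, the sixth toss comes up heads about half the time.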
This entry was posted in II. Probability and Statistics for Reliability by Fred Schenkelberg.

About Fred Schenkelberg

I am an experienced reliability engineering and management consultant with FMS Reliability, a consulting firm I founded in 2004. I left Hewlett Packard (HP)’s Reliability Team, where I helped create a culture of reliability across the organization, to assist other organizations. Given the scope of my work, I am considered an international authority on reliability engineering. My passion is working with teams to improve product reliability, customer satisfaction, and efficiencies in product development; and to reduce product risk and warranty costs. I have a Bachelor of Science in Physics from the United States Military Academy and a Master of Science in Statistics from Stanford University.

3 thoughts on “The Law of Large Numbers and the Gambler’s Fallacy”

  1. A good friend of mine got a job because of how she answered a question on this topic. She was asked, “If I have flipped a coin ten times and each time was a head, what is the probability of getting a head on the eleventh flip?” Her response: “Let me look at the coin. The probability of getting 10 in a row is so low that there must be something up with the coin.”

    • Hi Merrill,

      The probability of a fair coin getting 10 heads in a row is 0.5^10 ≈ 0.0009766, which is small, yet not much different from the component failure rates we rely upon all the time. Just because a capacitor works for 10 days in a row, does that mean there is something wrong with it?

      Having seen 10 heads in a row, we could claim with roughly 0.999 confidence that something is not so fair about the coin. Yet just ten tosses is most likely not sufficient to conclude so, unless we are willing to accept the small risk that the 10-head streak was just the random chance of a fair coin.
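The arithmetic in this exchange can be checked directly (a sketch that treats the streak probability as a one-sided p-value under an assumed fair-coin model):

```python
# Probability that a fair coin produces 10 heads in a row.
p_streak = 0.5 ** 10       # one-sided p-value under the fair-coin model

# Confidence with which we would reject "the coin is fair"
# after observing the streak.
confidence = 1 - p_streak

print(round(p_streak, 7))    # 0.0009766
print(round(confidence, 3))  # 0.999
```

This matches the 0.0009766 and "0.999 or so" figures in the comments above.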

      Cheers,

      Fred

      • Exactly. I did enjoy her story (a true story); recognizing that the data challenged the assumption of a fair coin was the point.
