Let's play a game. We're going to flip a coin, you get to call heads or tails, and you get $1 if you win and pay me $1 if you lose.
Pretty straightforward, yes?
We flip the coin a bunch. It comes down roughly half heads and half tails.
Then, suddenly the coin comes down heads six times in a row.
I say, "Hold on, the coin is due for tails. I'm not taking a $1 vs. $1 bet here... I need you to put up $1.20 if you want to bet on tails."
Do you take that bet?
Okay. The coin comes down tails twenty times in a row. Now it's really due for heads, right?
So, would you pay more than even odds to bet on heads?
(I hope not)
The gambler's fallacy is the idea that past, independent outcomes affect future outcomes. Because a coin will come down heads half the time and tails half the time, if you have a huge run of heads in a row... then tails must be "due", right?
This fallacy has lost a lot of people a lot of money.
If you're playing poker, your chance of getting the cards you want dealt to you is exactly the normal chance in any given hand. It doesn't matter how the game has gone over the last 10 hands, 20 hands, or even the last 10 hours.
Same with flipping coins.
Same with anything with independent outcomes.
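One way to convince yourself: simulate a pile of fair coin flips and check the heads rate immediately after a run of six heads. This is a quick sketch; the sample size and seed are arbitrary choices of mine.

```python
import random

random.seed(42)

def flips(n):
    """Simulate n fair coin flips: True = heads."""
    return [random.random() < 0.5 for _ in range(n)]

def heads_rate_after_streak(seq, streak_len):
    """Empirical P(heads | previous streak_len flips were all heads)."""
    hits = total = 0
    for i in range(streak_len, len(seq)):
        if all(seq[i - streak_len:i]):  # previous streak_len flips all heads
            total += 1
            hits += seq[i]
    return hits / total if total else float("nan")

seq = flips(1_000_000)
print(round(heads_rate_after_streak(seq, 6), 3))  # close to 0.5 - the streak doesn't matter
```

The flip right after a six-heads streak is heads about half the time, same as any other flip.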
Now, sometimes outcomes aren't independent. If you know a "streaky" person who goes through manic phases and long slumps, you might predict a slump after an extraordinarily long manic phase.
The key is figuring out whether events are independent or not. If they are - like flipping coins or dealing cards - then nothing is ever "due," and wagering as if something is "due" is a really bad idea.
Cards and coin flips make this easy to see, but there are lots of real-life applications. If probability has been coming down favorably for you, it doesn't mean "bad luck" is due. If you've been having a run of bad outcomes, that doesn't change the chances of a good one.
If there's really a 50% chance of something happening, then it doesn't matter if it just happened 10 times in a row the other way. Random is actually quite random. That said, if you see a ton of data pointing the other direction, you should probably consider whether your probability estimates are off. And maybe the recent past isn't independent of the future - an athlete with bad performance lately might be injured or have screwed-up mechanics.
But again, always - if you've got independent events, nothing is due, ever. Don't bet on something being "due" with independent outcomes or you're going to risk getting burnt.
Actually, if the coin comes up tails twenty times in a row, you should be happy to bet $1.20 to $1 on tails. (The probability of any twenty tosses all coming up tails is ~1/1,000,000 for a theoretical fair coin. If you do see twenty tails, it's a good bet that the coin is not fair.)
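To sanity-check that number, and to sketch why twenty tails points at an unfair coin, here's a toy Bayesian update. The 1-in-1,000 prior on trick coins is an illustrative assumption of mine, not anything from the post.

```python
from fractions import Fraction

# Probability a fair coin lands tails twenty times in a row.
p_run = Fraction(1, 2) ** 20
print(p_run)  # 1/1048576 - roughly one in a million

# Toy Bayesian update: suppose (illustrative assumption) 1 coin in 1,000
# is a two-tailed trick coin that always lands tails.
prior_trick = Fraction(1, 1000)
like_trick = Fraction(1)          # trick coin: tails every time
like_fair = Fraction(1, 2) ** 20  # fair coin: 1/2^20
posterior = (prior_trick * like_trick) / (
    prior_trick * like_trick + (1 - prior_trick) * like_fair
)
print(float(posterior))  # > 0.99: almost certainly not a fair coin
```

Even with a tiny prior on trick coins, twenty straight tails pushes the posterior almost entirely onto "this coin isn't fair."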
More generally, pattern recognition works (although it's easy to generalize from too-small samples). If you've been having a string of good luck, maybe you should be doing more of whatever you've been doing.
An example of when it's correct to expect a turn of luck is when there is reversion to the mean - but in that case the events are not independent.
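A minimal simulation of that point, with made-up skill and noise numbers: each game's score shares a player's stable skill, so a player's scores aren't independent, and an extreme score is partly luck that won't repeat.

```python
import random

random.seed(0)

# Each player has a stable skill; each game's score is skill + independent noise.
# Scores by the same player are correlated (shared skill), so an extreme first
# score predicts a less extreme second one: reversion to the mean.
players = [random.gauss(100, 10) for _ in range(100_000)]   # skills
game1 = [s + random.gauss(0, 10) for s in players]
game2 = [s + random.gauss(0, 10) for s in players]

top = [g2 for g1, g2 in zip(game1, game2) if g1 > 120]      # stellar first game
print(round(sum(top) / len(top), 1))  # well below 120 - the luck half is gone
```

Players who scored above 120 average noticeably less than 120 the next game, without anyone's skill changing.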
In Kahneman and Tversky's book on heuristics and biases (i.e., flaws in intuitive human statistical judgment), the coin example comes up in the first essay; the fallacy is sometimes derisively called belief in the "law of small numbers".
Sebastian, great point. Randomness and probability bring out a lot of cognitive biases. I recall there was a fair bit of discussion and even some conspiracy theories about shuffle play on iTunes playlists. People would hear songs by the same artist several times in a row and conclude that the sequence wasn't actually random. Apparently Apple tuned the algorithm to minimize that effect. There's plenty about that on the net, for example (randomly selected of course :) http://www.williamclayton.com/entries/itunes-shuffle-when-random-isnt-truely-random/
What Travis says about the Kelly Criterion is right on.
Also -- I could be totally wrong about this -- but I don't think flipping coins is as outcome-independent as you think, especially if one person is doing the flipping.
There should, I believe, be some degree of "memory" that gets passed down the time series per the Hurst exponent when flipping coins.
For more on the Hurst exponent, see:
A more interesting observation is to look at the Kelly Criterion. Basically... the size of your bet (or bankroll) is also important! For example... in the lottery, even if expected value is "favorable" (ie. betting $1 with 1:1.0M chance of winning $1.1M) it may still make sense to NOT bet unless you have a HUGE bankroll.
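A quick sketch of that point using the standard Kelly formula f* = p - (1 - p)/b for a single binary bet. Whether the commenter's $1.1M prize is net of the $1 stake barely matters at these scales; I've assumed it's the net profit.

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a binary bet.

    p: probability of winning
    b: net odds - profit per $1 staked if you win
    """
    return p - (1 - p) / b

# The commenter's lottery: a $1 ticket, 1-in-1,000,000 chance of $1.1M.
p = 1 / 1_000_000
b = 1_100_000
f = kelly_fraction(p, b)
print(f)      # ~9.1e-8: stake about one ten-millionth of your bankroll
print(1 / f)  # so a $1 ticket is only Kelly-justified with a ~$11M bankroll
```

The expected value is positive, yet Kelly says to risk only a vanishing fraction of your bankroll - which is exactly the comment's point about bet size mattering.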
Sit down before you read this.
We've got to talk.
Look. This is going to piss you off. This is going to look like I'm causing problems.
I'm not causing problems. I'm just pointing out where problems already exist.
Coin is a new startup that's trying to replace traditional credit cards. Its YouTube video has 6.8MM views. When you Google the word "coin" they show up as the #1 search result -- not only that, but news story results fill out much of the first page of search results. Not bad for a startup with a product that won't even be available for another 6+ months.
What did this startup do to have such a massively successful launch? And why is it coming from a small startup vs. an established company in the space?
In the world of product launches, many companies rely on Paid Media (i.e., ads) to launch new products. But startups don't have the huge ad budgets that big companies do, so they have to get creative by leveraging Earned Media (i.e., you, on Facebook, talking about it). Just like Lockitron did last year, Coin has touched a nerve, hitting its $50,000 crowdfunding campaign goal in under an hour, according to this Forbes article. The founder was quoted as saying: