One in a Million and e
Perhaps the most common way of expressing the probability of an event in everyday language is to refer to its "1 in \(n\)" chance of happening.
I've often found there's a common misunderstanding that if an event has a 1 in \(n\) chance of happening and you observe \(n\) trials, then surely it must have happened! If your odds of being attacked by a shark are 1 in 3,700,000, then in a city of 3,700,000 people somebody must surely have been attacked by a shark! If your odds of winning at Roulette are 1 in 38, then if you play 38 times you must be extremely likely to win! If you're in a room with 365 people, surely one of them shares your birthday!
Now of course this is obviously false for small numbers. Everyone knows that a coin has a 1 in 2 chance of landing heads up, but it is not difficult to observe 4 tails in a row. Take a second and ask yourself: "How likely is it that something with a 1 in \(n\) chance of happening will happen after \(n\) trials?" Does this likelihood go up or down as \(n\) increases?
In probability the best way to approach the question "How likely is this to happen at least once?" is to answer the much easier question "How likely is this to never happen?" If we can figure out \(P(\text{never})\), then we know \(P(\text{atLeastOnce}) = 1-P(\text{never})\). If this weren't true, it would mean there is some outcome in between never happening and happening at least once. To avoid having to tack a "1-" onto everything we calculate, let's just focus on the probability of an event never happening.
We'll start with a coin flip, which has a 1 in 2 chance of coming up heads. If \(P(\text{heads}) = 1/2\), then the probability of "not heads" is \(P(\text{not heads}) = 1 - 1/2\). We talk about "not heads" rather than "tails" because we're after a general solution, and having exactly one alternative outcome only happens when \(n = 2\). Because we want 'not heads' on both of our 2 independent trials, we simply multiply \(P(\text{not heads})\) by itself. So our final calculation for \(n = 2\) is:$$(1 - 1/2)(1 - 1/2) = (1 - 1/2)^2 = 0.25$$
Let's quickly go through the case of \(n = 6\), which is easily modeled by a single die. The odds of never rolling a '6' on a six-sided die after 6 rolls would be:$$(1-1/6)^6 \approx 0.334898$$
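If you'd rather simulate than calculate, here's a minimal sketch in Python (the function name and simulation count are my own choices) that estimates the chance of a 1 in \(n\) event never happening across \(n\) trials:

```python
import random

def simulate_p_never(n, num_simulations=100_000):
    """Estimate the probability that a 1-in-n event never occurs in n trials."""
    never_count = 0
    for _ in range(num_simulations):
        # The event "happens" on a single trial with probability 1/n.
        if not any(random.randrange(n) == 0 for _ in range(n)):
            never_count += 1
    return never_count / num_simulations

print(simulate_p_never(2))   # roughly 0.25, matching (1 - 1/2)^2
print(simulate_p_never(6))   # roughly 0.335, matching (1 - 1/6)^6
```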
Pretty quickly we can see the general form of our equation is going to be:
$$P(\text{never}) = (1-\frac{1}{n})^n$$
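Before going visual, it's worth plugging a few values of \(n\) into this formula. A quick sketch (the helper name p_never is just mine):

```python
def p_never(n):
    """Probability that a 1-in-n event never happens across n trials."""
    return (1 - 1 / n) ** n

for n in [2, 6, 100, 10_000, 1_000_000]:
    print(f"n = {n:>9,}: P(never) = {p_never(n):.6f}")
# The values march from 0.25 up toward roughly 0.367879 as n grows.
```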
Now that we have a function to model this behavior, we can plot it out and see what happens as \(n\) grows.
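If you want to generate a plot like this yourself, here's a rough matplotlib sketch (the range of \(n\) and the styling are my own choices, not the original figure):

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(2, 201)        # a range of n values to visualize
p_never = (1 - 1 / n) ** n   # P(never) for each n

plt.plot(n, p_never)
plt.xlabel("n")
plt.ylabel("P(never)")
plt.title("P(never) = (1 - 1/n)^n")
plt.show()
```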
We can see that this function converges to about 0.37. Since this is the probability of the event never happening, the probability that it happens at least once is about 0.63. This is really interesting! It means that if there's a 1 in 1,000,000 chance of winning the lottery, and a million people play, there's just shy of a 2/3 chance that somebody actually won! Given the ubiquity of 1 in \(n\) probabilities, this is very useful for back-of-the-envelope calculations.
Just one more thing...
We're actually not the first people to stumble across this interesting observation. Around 1690 the mathematician Jacob Bernoulli came across this same problem. But there's something even more magical about this discovery. The constant that Bernoulli found, and that we have just found, is actually \(\frac{1}{e}\), where \(e\) is the famous Euler's number!$$\lim_{n \to \infty} \left(1-\frac{1}{n}\right)^n = \frac{1}{e}$$In a very real, and somewhat strange, way we have independently found \(e\)!
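If you don't want to take that limit on faith, a quick numeric check (a sanity check, not a proof) shows the two sides closing in on each other:

```python
import math

# Compare (1 - 1/n)^n against 1/e for increasingly large n.
for n in [10, 1_000, 1_000_000]:
    print(f"(1 - 1/{n})^{n} = {(1 - 1 / n) ** n:.8f}")
print(f"1/e = {math.exp(-1):.8f}")
```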
If you enjoyed this post please subscribe to keep up to date and follow @willkurt!!