
To see part 2, click here.

Before we go much further, I need to lay a bit of a foundation in order to further explain what I'm trying to accomplish. As I mentioned above, randomness is a massively misunderstood concept that controls more facets of life than you think—some that you will appreciate, and perhaps some that you won't—but it is also a massively powerful tool that, once we digest a few basic ideas, can help us improve our own well-being.

We will start with an academic exercise: imagine you were holding a penny, and assume for the sake of simplicity that the penny is fair (that is, it is evenly weighted). Now imagine you were to flip that penny 50 times. On a sheet of paper, record the results of those 50 coin flips (it's important here that you record only what you imagine the results to be—don't actually flip a coin 50 times). With that done, let's consider the hypothetical experiment we just performed: the penny, being fair, has an equal chance of landing heads or tails on each of those 50 flips. Logic, then, would indicate that we might end up with something close to a total of 25 heads and 25 tails. That being said, I think most would be able to accept that we didn't break some vitally important law of probability if, for example, we ended up with 21 heads and 29 tails. Indeed, these fluctuations between the results we obtain and the results we expect happen frequently in even the most carefully concocted scientific studies. Those fluctuations are, in effect, the crux of this paper: because those errors, in many different types of experiments, are random in nature. So, if I were to flip a coin 50 times, while I may end up with 24 heads and 26 tails, you could end up with 17 heads and 33 tails, and dozens of other folks might end up with entirely different distributions of results. This fluctuation is expected—albeit hard to predict with a small number of experiments—but we will get into more detail on that later.
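If you'd rather not flip a real penny, a short simulation makes the fluctuation visible. This is a sketch of my own (the `flip_coins` helper is not from any library): repeating the 50-flip experiment a few times, the head counts cluster around 25 but rarely hit it exactly.

```python
import random

def flip_coins(n=50, seed=None):
    """Simulate n fair coin flips; return the number of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n))

# Repeat the 50-flip experiment five times: each run gives a
# different head count, scattered around the expected 25.
counts = [flip_coins(50, seed=s) for s in range(5)]
print(counts)
```

Each entry is one complete 50-flip experiment, so the spread across entries is exactly the run-to-run fluctuation described above.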

In the world of statistics, flipping a coin one time is called a Bernoulli experiment, and is not limited to coins. The basis of a Bernoulli experiment is that there are two possible outcomes: success and failure. We might say that flipping the coin and getting heads would be a success, or we could call it failure—but no matter how we phrase it, what you're left with is two possible outcomes. And, Bernoulli experiments do not require the odds of each outcome to be equal—as they are with the fair coin. For example, if rolling a single die, one might consider success as rolling a 2 or a 5, and any other outcome (that is, 1, 3, 4 or 6) failure. In that case, the chance of success is 2/6, or equivalently 1/3. Note that this is calculated by taking the number of possible successful outcomes, divided by the total number of possible outcomes (whether success or failure).
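The die-rolling version of a Bernoulli experiment is easy to check empirically. A minimal sketch (the `die_bernoulli` name is my own): over many trials, the observed success rate should settle near 2/6 = 1/3.

```python
import random

def die_bernoulli(rng):
    """One Bernoulli trial: roll a fair die; 'success' is a 2 or a 5."""
    return rng.randint(1, 6) in (2, 5)

rng = random.Random(0)
trials = 60_000
successes = sum(die_bernoulli(rng) for _ in range(trials))
print(successes / trials)  # hovers near 1/3
```

Note that each trial still has exactly two outcomes—success or failure—even though the die itself has six faces.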

To take that a step further, the act of repeating a Bernoulli experiment a given number of times is called a binomial experiment. So, our 50 coin flips qualify as such. Calculating the chances of an overall outcome, as long as the probability of each individual outcome is known, is simple: all you have to do is multiply. So if we repeated our die-rolling experiment five times, the chance of success (as defined above) in all five rolls is 1/3 × 1/3 × 1/3 × 1/3 × 1/3 = (1/3)^5 = 1/243.
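The multiplication above can be done exactly with Python's built-in `fractions` module, which keeps the arithmetic in whole fractions rather than decimals:

```python
from fractions import Fraction

# Chance of "success" (rolling a 2 or a 5) on one roll of a fair die:
p = Fraction(2, 6)   # automatically reduces to 1/3

# Chance of succeeding on all five independent rolls:
p_all_five = p ** 5
print(p_all_five)    # 1/243
```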

I should also take time to explain the idea of an independent variable. For the sake of this article, a variable is just one of the outcomes that we've been discussing. An independent variable is one whose outcome has no effect on any of the other variables. In the case of our die-rolling, if I were to roll a 3 on my first throw, that result would not have any effect on future throws of the die—and so my chance of success is always 1/3. This concept has some far-reaching consequences: consider the lottery. In your own Bernoulli experiment (or, if you're a glutton for punishment, your own binomial experiment), suppose you define "success" as winning the lottery. And, every time you play, chances are high that you will fail. But consider that each time you play the lottery, your variable (the outcome of each trip to the corner store to buy your ticket) is independent of all the other trips you've made to the food mart, and as such, your odds of winning never increase.
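Independence is something we can test empirically, too. In this sketch (my own construction, not a standard routine), we compare the overall success rate of our die experiment against the success rate on only those rolls that immediately follow a failure. If the previous roll mattered, these two rates would differ; independence says both sit near 1/3.

```python
import random

rng = random.Random(1)
rolls = [rng.randint(1, 6) for _ in range(200_000)]
success = [r in (2, 5) for r in rolls]

# Overall success rate vs. the rate on rolls that immediately
# follow a failure: independence means both sit near 1/3.
overall = sum(success) / len(success)
after_failure = [b for prev, b in zip(success, success[1:]) if not prev]
conditional = sum(after_failure) / len(after_failure)
print(round(overall, 3), round(conditional, 3))
```

This is precisely why "being due for a win" at the lottery is an illusion: conditioning on past failures does not move the needle.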

Let’s now revisit our coin flipping experiment. Here is a list of outcomes from 50 random flips of a coin:

[Grid of the 50 outcomes: 22 heads (H) and 28 tails (T), ending with a run of six consecutive tails]

At first glance, it doesn't look like anything special—just a grid of T's and H's. And, we have 22 heads and 28 tails, which is well within the range we would expect. But consider, if you had been flipping a coin and obtaining these results, how odd the last 10 tosses might seem to you: notice that we obtained tails on six consecutive flips of the coin. Compare these results with your own: what is the highest number of consecutive flips (of heads or tails) that you imagined? I'd be willing to bet it was less than six!

Rest assured, we have not accomplished anything rare here, and I will explain why. On the surface, this may seem like a feat: based on our reasoning above, the chance of flipping a coin six times and having it land tails side up all six times is 1/2^6 = 1/64. But that doesn't tell the whole story, because we're not only flipping the coin six times. In 50 consecutive flips, there are 45 positions where a run of six consecutive flips could start. A quick back-of-envelope estimate, then, is 45 chances at a 1-in-64 event, or 45/64, around 70%. That estimate is rough, because the 45 windows overlap and are therefore not independent of one another; a more careful calculation puts the chance of seeing six identical outcomes in a row somewhere in 50 flips at roughly 55%. Either way, the conclusion stands: if we were to do this experiment (the 50 coin flips) 100 times, we should expect to see six of the same outcomes consecutively in over half of those experiments.
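Rather than wrestle with the overlapping-window algebra, we can estimate the probability directly by simulation. This is a sketch of my own (the `longest_run` helper is not from any library): it repeats the 50-flip experiment many times and counts how often a run of six or more appears.

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

rng = random.Random(42)
trials = 20_000
hits = sum(
    longest_run([rng.random() < 0.5 for _ in range(50)]) >= 6
    for _ in range(trials)
)
print(hits / trials)  # lands near 0.55: better than even odds
```

So a streak of six is not merely possible—it is the majority outcome, which is exactly why our imagined flip sequences, with their short streaks, look so unlike the real thing.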

The moral of the story is this: whatever we expect to happen in aggregate (which is important for other reasons), the result of a single coin flip is random. And, since each flip is independent, how the coin falls is, in every trial, random. So, if you're looking for a random collection of 0's and 1's, or H's and T's, or successes and failures, you need not look any further than your own couch cushions.

It's getting to the point that I'm having difficulty breaking this into smaller chunks, so there may be a couple of larger posts coming up in this series. Thanks as always! Be well and stay tuned.