Introduction to One-Way ANOVA

Let’s look at an introduction to one-way analysis of variance, often shortened to ANOVA. Here, I have plotted three box plots corresponding to three samples from three separate populations. A natural question arises: is there strong evidence against the null hypothesis that the population means are all equal? This is what one-way analysis of variance tests. The alternative hypothesis is simply that the null hypothesis is wrong, or in other words, that the population means are not all equal. One-way ANOVA is a statistical method that tests the null hypothesis that K populations all have the same mean by comparing the variability between groups to the variability within groups. It might not be immediately obvious what this means, so let’s take a look at it visually. If we look at the plot on the left for a moment, we see three box plots corresponding to three different samples.
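In symbols, with mu one through mu K denoting the K population means, the hypotheses can be written as:

```latex
H_0:\; \mu_1 = \mu_2 = \cdots = \mu_K
\qquad \text{vs.} \qquad
H_a:\; \text{the population means are not all equal}
```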

We can see that the sample means are five, seven, and six, so there is some variability between these sample means. Over on the right, we have a very similar setting, and the sample means there, five, seven, and six, are exactly the same as in the plot on the left. So there is the same variability between the sample means, and the sample sizes are the same for all groups. The fundamental difference between the two scenarios is that the variability within groups in the plot on the right is much greater than the variability within groups in the plot on the left. Let’s see what the end result of an analysis of variance on these two data sets looks like; we will learn how to carry out the test as we go along.
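To make this comparison concrete, here is a minimal simulation sketch in Python. The group standard deviations, sample sizes, and seed are made up for illustration; scipy’s f_oneway function carries out the one-way ANOVA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Scenario 1: group means 5, 7, 6 with small within-group spread
# (standard deviation 0.5 is an illustrative assumption).
a = rng.normal(5, 0.5, size=20)
b = rng.normal(7, 0.5, size=20)
c = rng.normal(6, 0.5, size=20)

# Scenario 2: the same group means, but much larger within-group
# spread (standard deviation 4.0, again an assumption).
d = rng.normal(5, 4.0, size=20)
e = rng.normal(7, 4.0, size=20)
f = rng.normal(6, 4.0, size=20)

f1, p1 = stats.f_oneway(a, b, c)
f2, p2 = stats.f_oneway(d, e, f)
print(f"small within-group spread: F = {f1:.2f}, p = {p1:.3g}")
print(f"large within-group spread: F = {f2:.2f}, p = {p2:.3g}")
```

With the same between-group variability, the first call typically produces a large F statistic and a tiny p-value, while the second produces a small F statistic and a large p-value.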

But if we know something about hypothesis testing, know what the null hypothesis is, and can properly interpret a p-value, we can already read off the result. For the plot on the left, the null hypothesis is that mu A, mu B, and mu C are equal, or in other words, that these three samples come from three populations that all have the same population mean. The p-value here is very, very small, so there is very strong evidence against this null hypothesis. We’ll see that the test statistic is an F statistic, and that F statistic is what yields the p-value. But for now, we can look at the p-value and say there is very strong evidence against the null hypothesis. If we look at the other side, corresponding to the plot on the right, we see that the F statistic is rather small, yielding a large p-value.
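For reference, the p-value is the area to the right of the observed F statistic under the appropriate F distribution. Here is a short Python sketch, where the F value and the group and sample counts are hypothetical, not the values from these plots:

```python
from scipy import stats

F = 42.0      # hypothetical observed F statistic
k, N = 3, 60  # hypothetical: 3 groups, 60 observations in total

df_between = k - 1  # numerator degrees of freedom
df_within = N - k   # denominator degrees of freedom

# Survival function = area in the right tail of the F distribution.
p = stats.f.sf(F, df_between, df_within)
print(f"p-value = {p:.3g}")
```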

So we get a large p-value there, meaning that there is not strong evidence against that null hypothesis. Note again that the only difference between the two scenarios is that the variability within groups on the right is much greater than on the left. What happened is that the variability within the groups swamped the variability between the groups, and we didn’t see a significant difference. To summarize, the p-values tell us that there is strong evidence against the null hypothesis that mu A, mu B, and mu C are equal, but there is no real evidence against the null hypothesis that mu D, mu E, and mu F are equal.

One-way analysis of variance can be viewed as a generalization of the pooled-variance two-sample t-test to more than two groups, and as such, its assumptions are the same as those of the pooled-variance two-sample t-test. Specifically, we are assuming that we have independent simple random samples, that the populations are normally distributed, and that the population variances are equal, or in other words, that sigma one squared through sigma K squared are all equal. We sometimes say that they are equal to some common variance sigma squared.

The end result of the calculations is an ANOVA table, and our ANOVA table is going to look something like this. The names can change: sometimes treatment, sometimes group, different labels for these rows, but overall the gist is the same. I look at all of these calculations in detail in another video, so this is just the general idea. In the end, we get an F statistic, and that F statistic is going to yield a p-value for us, and we’re going to use that p-value to reach an appropriate conclusion.
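As a compact reference, the usual calculations behind the table look like this, where k is the number of groups, N is the total sample size, n_i, x-bar_i, and s_i squared are the i-th group’s sample size, mean, and variance, and x-bar is the grand mean:

```latex
\begin{align*}
\mathrm{SS}_{\mathrm{Treatment}} &= \sum_{i=1}^{k} n_i (\bar{x}_i - \bar{x})^2,
  & df &= k - 1,\\
\mathrm{SS}_{\mathrm{Error}} &= \sum_{i=1}^{k} (n_i - 1)\, s_i^2,
  & df &= N - k,\\
F &= \frac{\mathrm{MS}_{\mathrm{Treatment}}}{\mathrm{MS}_{\mathrm{Error}}}
   = \frac{\mathrm{SS}_{\mathrm{Treatment}}/(k-1)}{\mathrm{SS}_{\mathrm{Error}}/(N-k)}.
\end{align*}
```

Under the null hypothesis, F follows an F distribution with k − 1 and N − k degrees of freedom, which is where the p-value comes from.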

Although it’s possible to do the calculations by hand, it’s typically best to use a computer. If we look at our A, B, C example from earlier, the ANOVA table looks something like this. This is the output from the statistical software package R, but other statistical computing packages have very similar output. As you can see, R calls the top line “Group” and the bottom line “Residuals,” where we called them “Treatment” and “Error” earlier. The computer can do the brute-force calculations for us and give us the F statistic, and that F statistic yields the p-value that helps us reach our conclusion. Since this p-value is very small, we reached the conclusion that there is very strong evidence that these three samples come from populations that do not all have the same population mean. We will learn how to carry out these calculations and look at another example in another video.
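For anyone working in Python rather than R, statsmodels produces a table with the same structure. The data frame below is a small made-up stand-in for the A, B, C samples, not the actual data behind the plots:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Made-up stand-in data for groups A, B, and C.
df = pd.DataFrame({
    "value": [4.8, 5.1, 5.3, 6.9, 7.2, 6.8, 5.9, 6.1, 6.0],
    "group": ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
})

# Fit the one-way model and print an ANOVA table analogous to R's:
# a row for the group effect and a row for the residuals, with the
# degrees of freedom, sums of squares, mean squares, F, and p-value.
model = smf.ols("value ~ group", data=df).fit()
print(anova_lm(model))
```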