Science & Tech

Type 2 Error Explained: How to Avoid Hypothesis Testing Errors

Written by MasterClass

Last updated: Feb 16, 2023 • 4 min read

As you test hypotheses, there’s a possibility you might interpret your data incorrectly. Sometimes people fail to reject a false null hypothesis, leading to a type 2 (or type II) error. This can lead you to draw broader, inaccurate conclusions about your data. Learn more about what type 2 errors are and how you can avoid them in your statistical tests.

What Is a Type 2 Error?

A type 2 error (or type II error) means you’ve failed to reject a false null hypothesis and prematurely disregarded your alternative hypothesis. The rate of type 2 errors reflects the statistical power of your test: the lower the power, the more likely this error becomes in your initial examination of a dataset.

Keep in mind this doesn’t necessarily mean your alternative hypothesis is true, just that you’ve returned a false negative result for the null. You likely concluded your alternative hypothesis did not produce a statistically significant result when it actually did, at least enough to cast doubt on the null hypothesis.
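To make this concrete, here is a minimal Python sketch (not from the original article) of how a type 2 error can arise: the alternative hypothesis is actually true, but a small sample fails to produce a significant result, so the test wrongly retains the null. The scenario, numbers, and function name are illustrative assumptions.

```python
import random
import statistics

random.seed(42)

def one_sample_z_test(sample, mu0, sigma):
    """Return the z statistic for H0: population mean == mu0,
    assuming a known population standard deviation sigma."""
    n = len(sample)
    return (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)

# Hypothetical setup: the true population mean is 0.3 (so the
# alternative is true), but the null hypothesis claims it is 0.
true_mean, sigma, n = 0.3, 1.0, 20
sample = [random.gauss(true_mean, sigma) for _ in range(n)]
z = one_sample_z_test(sample, mu0=0.0, sigma=sigma)

# With alpha = 0.05 (two-sided), the critical z value is about 1.96.
if abs(z) < 1.96:
    print("Failed to reject H0 -> possible type 2 error")
else:
    print("Rejected H0")
```

With only 20 observations and a modest true effect, the test frequently lands in the first branch even though the null is false, which is exactly the false negative described above.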

What Is a Type 1 Error?

Type 1 errors (or type I errors) return false positive results for alternative hypotheses, leading researchers to reject true null hypotheses. In other words, you incorrectly believe your statistical experiment was a success. The probability of a type 1 error equals the alpha level (or statistical significance level) you set for your test.

Type 1 vs. Type 2 Errors: What’s the Difference?

While both type I and type II errors can skew datasets, they do so in different ways. Learn more about some of the significant differences between these types of errors:

  • Ability to correct: When you attempt to reduce the type 1 error rate—by lowering the statistical significance level—you increase the type 2 error rate as a trade-off. The opposite is also true: raising your statistical significance level to stave off a type 2 error makes a type 1 error more probable. By increasing the sample size instead, you can reduce the probability of both errors at the same time.
  • Potential impact: If you have to choose between avoiding a type 1 or type 2 error, aim to avoid the former in most cases. This is because type 1 errors can lead to mistaken follow-up experimentation or harmful real-world policies built on a false positive. A type 2 error, by contrast, cuts off research prematurely but usually has little real-world effect otherwise. Still, tailor your decision-making to the specifics of your own statistical experiment, and always do your best to avoid both types of errors if possible.
  • Source of the problem: A type 1 error often means your statistical significance level is too high, while a type 2 error often means it’s too low. Of course, in either case, the problem might arise from something else, but the significance level remains one of the most probable causes. To check this, see how your p-value (the result of your test) measures up against the significance level and whether those findings seem inconsistent.
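The trade-off described in the first bullet can be checked by simulation. This hedged Python sketch (the effect size, sample size, and critical values are illustrative assumptions) estimates both error rates for a one-sided z test at a strict and a loose alpha level:

```python
import random

random.seed(0)

def error_rates(alpha_z, effect, n, trials=2000):
    """Monte Carlo estimate of type 1 and type 2 error rates for a
    one-sided z test of H0: mean == 0, with known sigma = 1."""
    type1 = type2 = 0
    for _ in range(trials):
        # Under H0 (true mean 0): rejecting is a type 1 error.
        null_mean = sum(random.gauss(0, 1) for _ in range(n)) / n
        if null_mean * n ** 0.5 > alpha_z:
            type1 += 1
        # Under H1 (true mean = effect): failing to reject is a type 2 error.
        alt_mean = sum(random.gauss(effect, 1) for _ in range(n)) / n
        if alt_mean * n ** 0.5 <= alpha_z:
            type2 += 1
    return type1 / trials, type2 / trials

# A stricter alpha (larger critical z) lowers type 1 but raises type 2.
strict = error_rates(alpha_z=2.33, effect=0.5, n=15)  # alpha ~ 0.01
loose = error_rates(alpha_z=1.64, effect=0.5, n=15)   # alpha ~ 0.05
print("strict alpha:", strict)
print("loose alpha:", loose)
```

Running this shows the see-saw directly: the stricter threshold produces fewer false positives but noticeably more false negatives than the looser one.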

Type 2 Error Example

Suppose a drug company hopes to prove its new medication can lower cholesterol and reduce heart disease. When the results come in, they fail to reach significance at the usual 0.05 alpha level.

In some cases, this might mean the drug really is ineffective. In others, it might be a sign to rerun the test with a larger sample size and an increased alpha level to avoid wrongly retaining the null hypothesis. With better inputs and parameters, your alternative hypothesis might still overturn the null.
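One way to see why a larger trial helps in a scenario like this is to compute the test’s power. This Python sketch uses the standard normal approximation for a one-sided z test; the 0.3 standard-deviation drug effect and the trial sizes are hypothetical numbers chosen for illustration, not figures from the article.

```python
import math

def power_one_sided(effect, n, z_alpha=1.64):
    """Approximate power of a one-sided z test (sigma = 1, alpha ~ 0.05),
    computed with the standard normal CDF via the error function."""
    z = effect * math.sqrt(n) - z_alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical cholesterol-drug effect of 0.3 standard deviations:
print(power_one_sided(0.3, 50))   # small trial: power well below 80%
print(power_one_sided(0.3, 200))  # larger trial: power near certainty
```

With only 50 patients, the chance of detecting this effect is modest, so a non-significant result is quite plausible even if the drug works; at 200 patients, a type 2 error becomes unlikely.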

How to Reduce Type 2 Errors

It’s possible to reduce the probability of a type 2 error in your statistical hypothesis testing. Keep these tips in mind as you strive for greater accuracy:

  • Enlarge the sample size. If you use a larger random sample, you mitigate your risk of a type 2 error. The more information you feed into your test, the more confident you can be that your sample represents the population. A larger sample also increases statistical power, which helps you guard against both types of error at once.
  • Increase the significance level. In general, you set your statistical significance level to 0.05 to test whether or not you should reject a null hypothesis. To mitigate the likelihood of a type 2 error, you can raise this significance level to around 0.10 or higher. This lowers the bar for statistical significance, making it easier to reject the null hypothesis. It does, unfortunately, come with the negative side effect of increasing the likelihood of a type 1 error.
  • Reevaluate your data. Your statistical results will only be as good as the information you use at the start of your experiment. Try to remain vigilant about this process so you don’t have to go back and start again. Type 2 errors often result from careless data recording. Still, if something seems off about the results as you conclude your initial research, rerun the test rather than accept potentially inaccurate results.
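The first tip above can be quantified before an experiment even starts. Here is a hedged sketch of the standard sample-size approximation for a one-sided z test; the function name and default critical values (z of 1.64 for alpha near 0.05, z of 0.84 for roughly 80% power) are conventional choices, not something specified by the article.

```python
import math

def required_n(effect, z_alpha=1.64, z_beta=0.84):
    """Approximate sample size for a one-sided z test (sigma = 1) so
    that power is about 80% (z_beta ~ 0.84) at a significance level
    near 0.05 (z_alpha ~ 1.64), for a given standardized effect size."""
    return math.ceil(((z_alpha + z_beta) / effect) ** 2)

# Smaller effects demand much larger samples:
print(required_n(0.5))  # -> 25
print(required_n(0.2))  # -> 154
```

The takeaway matches the bullet list: if you suspect the effect you’re hunting is small, plan for a large sample up front rather than risking a type 2 error with an underpowered test.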

Learn More

Get the MasterClass Annual Membership for exclusive access to video lessons taught by science luminaries, including Terence Tao, Bill Nye, Neil deGrasse Tyson, Chris Hadfield, Jane Goodall, and more.