Flaws: Confidence Interval and P-value

Confidence Interval Problem

A type II error is a statistical term, used in hypothesis testing, that describes the error that occurs when one fails to reject a null hypothesis that is actually false. In other words, it produces a false negative… A type II error is sometimes called a beta error.

A type II error can be reduced by making the criteria for rejecting a null hypothesis less stringent. For instance, if an analyst treats anything that falls outside a 99% confidence interval as statistically significant, relaxing that threshold to a 95% confidence interval reduces the chances of a false negative. However, doing so at the same time increases your chances of encountering a type I error. When conducting a hypothesis test, the probability or risks of making a type I error or type II error should be considered…
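This trade-off is easy to see in a quick simulation. The sketch below (Python with NumPy and SciPy; the 0.5 standard-deviation effect, groups of 30 and 10,000 trials are illustrative choices, not figures from the article) estimates the type I and type II error rates of a two-sample t-test at the 95% and 99% confidence thresholds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, trials = 30, 0.5, 10_000  # illustrative values only

def error_rates(alpha):
    type1 = type2 = 0
    for _ in range(trials):
        # Under H0 both groups share the same mean, so any rejection is a false positive.
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            type1 += 1
        # Under H1 the second group is shifted, so failing to reject is a false negative.
        a, b = rng.normal(0, 1, n), rng.normal(effect, 1, n)
        if stats.ttest_ind(a, b).pvalue >= alpha:
            type2 += 1
    return type1 / trials, type2 / trials

for alpha in (0.05, 0.01):  # 95% and 99% confidence thresholds
    t1, t2 = error_rates(alpha)
    print(f"alpha={alpha}: type I rate ~ {t1:.3f}, type II rate ~ {t2:.3f}")
```

Tightening the threshold from 0.05 to 0.01 pushes the estimated type I rate down toward 1% while the type II rate climbs, which is the trade-off described above.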

The difference between a type II error and a type I error is that a type I error rejects the null hypothesis when it is true (a false positive). The probability of committing a type I error is equal to the level of significance that was set for the hypothesis test. Therefore, if the level of significance is 0.05, there is a 5% chance a type I error may occur.

The probability of committing a type II error is equal to 1 minus the power of the test, also known as beta. The power of the test could be increased by increasing the sample size, which decreases the risk of committing a type II error.
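Since beta equals 1 minus power, the effect of sample size can be sketched with a standard normal-approximation power calculation (Python with SciPy; the 0.3 standard-deviation effect size is an arbitrary assumption for illustration).

```python
from scipy.stats import norm

def power_z_test(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test.
    effect_size is the true mean shift in standard-deviation units."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = effect_size * n ** 0.5
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

for n in (20, 50, 100, 200):
    power = power_z_test(0.3, n)
    print(f"n={n:3d}: power = {power:.2f}, beta (type II risk) = {1 - power:.2f}")
```

As n grows, power rises toward 1 and beta shrinks, which is the point made above about larger samples reducing the risk of a type II error.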

Adam Hayes (April 19, 2019). Type II Error. Investopedia. https://www.investopedia.com/terms/t/type-ii-error.asp

P-value Difficulty

…Texas A&M University professor Valen Johnson, writing in the prestigious journal Proceedings of the National Academy of Sciences, argues that p less than .05 is far too weak a standard.

Using .05 is, he contends, a key reason why false claims are published and many published results fail to replicate. He advocates requiring .005 or even .001 as the criterion for statistical significance.

The p value is at the heart of the most common approach to data analysis – null hypothesis significance testing (NHST). Think of NHST as a waltz with three steps…

Most researchers don’t appreciate that p is highly unreliable. Repeat your experiment and you’ll get a p value that could be extremely different. Even more surprisingly, p is highly unreliable even for very large samples…
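This "dance of the p values" is simple to reproduce. The sketch below (Python; the true effect of 0.5 standard deviations and groups of 32 are made-up illustrative numbers) runs the identical two-group experiment ten times and prints the p value each replication happens to give.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, effect = 32, 0.5  # the same true effect in every replication

for i in range(10):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    p = stats.ttest_ind(treated, control).pvalue
    print(f"replication {i + 1:2d}: p = {p:.3f}")
```

Even though every replication samples from exactly the same populations, the printed p values typically range from well below .01 to well above .05.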

…there’s a price to pay for demanding stronger evidence. In typical cases, we’d need to roughly double our sample sizes to still have a reasonable chance of finding true effects. Using larger samples would indeed be highly desirable, but sometimes that’s simply not possible…
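The rough doubling can be checked with the usual normal-approximation sample-size formula. The sketch below (Python; the 0.5 standard-deviation effect and 80% power target are assumptions for illustration, not figures from the article) compares the per-group sample size a two-group comparison needs at p < .05 versus p < .005.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha, power=0.80):
    """Approximate per-group n for a two-sample z-test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

n_05, n_005 = n_per_group(0.5, 0.05), n_per_group(0.5, 0.005)
print(f"alpha = .05 : about {n_05:.0f} per group")
print(f"alpha = .005: about {n_005:.0f} per group (ratio ~ {n_005 / n_05:.2f}x)")
```

Under these assumptions the stricter threshold asks for roughly 1.7 times as many participants per group, broadly in line with the "roughly double" figure quoted above.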

…The core problem is that NHST panders to our yearning for certainty by presenting the world as black or white — an effect is statistically significant or not; it exists or it doesn’t. In fact our world is many shades of grey — I won’t pretend to know how many. We need something more nuanced than NHST, and fortunately there are good alternatives.

Bayesian techniques are highly promising and becoming widely used. The most readily available and already widely used alternative is estimation based on confidence intervals.

A confidence interval gives us the best estimate of the true effect, and also indicates the extent of uncertainty in our results. Confidence intervals are also what we need in order to use meta-analysis, which allows us to integrate results from a number of experiments that investigate the same issue.
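As a sketch of what estimation and meta-analysis look like in practice (Python; the study effects and standard errors below are invented for illustration), the snippet reports a 95% confidence interval for each study and then pools the studies with a standard inverse-variance, fixed-effect weighting.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical effect estimates and standard errors from three studies of one question.
means = np.array([0.42, 0.25, 0.33])
ses = np.array([0.15, 0.10, 0.20])

z = norm.ppf(0.975)  # multiplier for a 95% confidence interval
for m, se in zip(means, ses):
    print(f"study: effect = {m:.2f}, 95% CI = [{m - z * se:.2f}, {m + z * se:.2f}]")

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
w = 1 / ses ** 2
pooled = np.sum(w * means) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"pooled: effect = {pooled:.2f}, "
      f"95% CI = [{pooled - z * pooled_se:.2f}, {pooled + z * pooled_se:.2f}]")
```

The pooled interval is narrower than any single study's, which is exactly the "integration of all available evidence" the author recommends as a basis for decisions.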

We often need to make clear decisions — whether or not to license the new drug, for example — but NHST provides a poor basis for such decisions. It’s far better to use the integration of all available evidence to guide decisions, and estimation and meta-analysis provide that…

Geoff Cumming (November 12, 2013). The problem with p values: how significant are they, really? The Conversation. https://theconversation.com/the-problem-with-p-values-how-significant-are-they-really-20029
