1478: P-Values

Title text: If all else fails, use "signifcant at a p>0.05 level" and hope no one notices.

Explanation


This comic plays on how the significance of scientific experiments is measured and interpreted. The p-value is the probability of obtaining results at least as extreme as those actually observed, assuming that the null hypothesis (i.e. that there is no real effect) is true. Low p-values mean the observed results would be unlikely under the null hypothesis, so they are taken as evidence against it; high p-values mean the results are consistent with chance alone. The p-value calculated from the experimental data is used to decide whether the result is "significant" and supports the researcher's hypothesis.
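
As a rough illustration of this definition, here is a minimal sketch (not part of the comic; it assumes the Python library scipy is available) that computes the p-value for observing 60 heads in 100 flips of a coin hypothesised to be fair:

    # Null hypothesis: the coin is fair (probability of heads = 0.5).
    # The p-value is the probability, under that assumption, of getting a
    # result at least as lopsided as the 60 heads actually observed.
    from scipy.stats import binomtest

    result = binomtest(k=60, n=100, p=0.5, alternative="two-sided")
    print(result.pvalue)  # about 0.057, just above the usual 0.05 cutoff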

The significance threshold (usually 0.05) should be set prior to the experiment, precisely to prevent it from being changed after the fact to make the report look better. A simple change of the threshold (e.g. from 0.05 to 0.1) turns a result with p-value = 0.06 from "not significant" into "significant".
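
A toy snippet (the numbers are hypothetical, not taken from any real study) showing how the very same result flips from "not significant" to "significant" when the threshold is quietly relaxed after the fact:

    p_value = 0.06                # result of some hypothetical experiment
    for alpha in (0.05, 0.10):    # 0.05 set beforehand vs. 0.10 chosen afterwards
        verdict = "significant" if p_value < alpha else "not significant"
        print(f"at the {alpha} level, p = {p_value} is {verdict}")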

The cutoff most studies use to declare significance is p < 0.05, which is why all p-values in the comic below that number are marked as at least "significant". 0.050 is labeled "Oh crap. Redo calculations." because the p-value is very close to being considered significant, but isn't; redoing the calculations may give a slightly different answer, but there is no guarantee it will fall below 0.050. Values above 0.050 but below 0.1 are treated as suggestive of significance without actually establishing it, which at best justifies additional trials.
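
One reading of the "Oh crap. Redo calculations." entry, also suggested in the discussion below, is that a value displayed as 0.050 may simply be a rounding of something slightly above or below the cutoff, so recomputing it at higher precision could push it either way. A small sketch with made-up numbers:

    # Both values print as 0.050 when rounded to three decimal places,
    # yet they fall on opposite sides of the conventional 0.05 cutoff.
    for p in (0.0498, 0.0503):
        verdict = "significant" if p < 0.05 else "not significant"
        print(f"reported as {p:.3f} (actually {p}): {verdict} at the 0.05 level")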

Values higher than 0.1 should be considered not significant at all; instead, the comic suggests taking a part of the sample (a "subgroup") and analyzing it without regard to the rest of the sample. For example, in a study trying to prove that people always sneeze when walking past a particular street lamp, someone would record the number of people who pass the lamp and the number of people who sneeze. If the results don't reach the desired p<0.1, then pick a subgroup (e.g. OK, not all people sneeze, but look! women sneeze more than men, so let's analyze only women). Of course, this is not accepted scientific procedure, as it is very likely to introduce sampling bias into the result.
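
A rough simulation of why this is so misleading (a hypothetical setup using scipy, not the street-lamp study itself): even when there is no effect at all, examining enough subgroups will frequently turn up at least one with p < 0.05 purely by chance. This is the same mechanism mocked in comic 882, linked in the discussion below.

    import random
    from scipy.stats import ttest_ind

    random.seed(0)
    trials = 500        # simulated studies, each with no real effect anywhere
    subgroups = 20      # subgroups examined per study
    lucky = 0           # studies where some subgroup "reached significance"

    for _ in range(trials):
        for _ in range(subgroups):
            # Two samples of pure noise: the null hypothesis is true by construction.
            a = [random.gauss(0, 1) for _ in range(30)]
            b = [random.gauss(0, 1) for _ in range(30)]
            if ttest_ind(a, b).pvalue < 0.05:
                lucky += 1
                break

    print(lucky / trials)   # roughly 0.6, far above the nominal 5% false-positive rate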

The title text suggests that, if the results cannot normally be considered significant, one can simply invert the inequality, turning p<0.05 into p>0.05. This is intended to fool casual readers, as the only change is to the inequality sign, which may go unnoticed.

Transcript


There are two columns in a table, labelled "p-value" and "interpretation". Each entry in the interpretation column covers a range of values in the p-value column.

P-values
P-value   Interpretation
0.001     Highly significant
0.01
0.02
0.03
0.04      Significant
0.049
0.050     Oh crap. Redo calculations.
0.051     On the edge of significance
0.06
0.07      Highly suggestive, relevant at the p<0.10 level
0.08
0.09
0.099
≥0.1      Hey, look at this interesting subgroup analysis



Discussion

IMHO the current explanation is misleading. The p-value describes how well the experiment output fits the hypothesis. The hypothesis can be that the experiment output is random. Low p-values point out that the experiment output fits well with the behavior predicted by the hypothesis. The higher the p-value, the more the observed and predicted values differ. Jkotek (talk) 08:54, 26 January 2015 (UTC)

High p-values do not signify that the results differ from what was predicted; they simply indicate that there are not enough results for a conclusion. --108.162.230.113 20:13, 26 January 2015 (UTC)

I read this comic as a bit of a jab at either scientists or media commentators who want the experiments to show a particular result. As the significance decreases, first they re-do the calculations, either in the hope that the result might have been erroneous and would be re-classified as significant, or to intentionally fudge the numbers to increase the significance. The next step is to start clutching at straws, admitting that while the result isn't technically significant, it is very close to being significant. After that, changing the language to 'suggestive' may convince the general public that the result is actually more significant than it is, while changing the parameters of the 'significance' threshold allows it to be classified as significant. Finally, they give up on the overall results, and start pointing out small sections which may by chance show some interesting features.

All of these subversive efforts could come about because of scientists who want their experiment to match their hypothesis, journalists who need a story, researchers who have to justify further funding etc etc. --Pudder (talk) 09:01, 26 January 2015 (UTC)

I like how you have two separate categories - "scientists" and "researchers" with each having two different goals :) Nyq (talk) 10:12, 26 January 2015 (UTC)
As a reporter, I can assure you that journalists are not redoing calculations on studies. Journalists are notorious for their innumeracy; the average reporter can barely figure the tip on her dinner check. Most of us don't know p-values from pea soup. 108.162.216.78 16:44, 26 January 2015 (UTC)
The press has at various times been guilty of championing useless drugs AND 'debunking' useful ones, but it's more to do with how information is presented to them than any particular statistical failing on their part. They can look up papers the same as anyone, but without a very solid understanding of the specific area of science there's no real way that a layman can determine if an experiment is flawed or valid or if results have been manipulated. Reporters (like anyone researching an area) at some point have to decide who to trust and who not to, and make up their own mind. It doesn't even matter if a reporter IS very scientifically literate, because the readers aren't and THEY have to take his word for it. Certainly reporters should be much more rigorous, but there's more going on than just 'reporters need to take a stats class'. Journals and academics make the exact same mistakes too: skipping to the conclusion, getting excited about breakthroughs that are too good to be true, and assuming that science and scientists are fundamentally trustworthy. And the answer isn't even that everyone involved should demand better proof, because that's exactly the problem we already have - what actually IS proof? Can you ever trust any research done by someone else? Can you even trust research that you were a part of? After all, any large sample group takes more than one person to implement and analyse, and your personal observations could easily not be representative of the whole. We love to talk about proof as being a beautifully objective thing, but in truth the only true proof comes after decades of work and studies across huge numbers of subjects, which naturally never happens if the first test comes back negative, because no-one puts much effort into re-testing things that are 'false'. 01:29, 13 April 2015 (UTC)

This one resembles this interesting blog post very much.--141.101.96.222 13:26, 26 January 2015 (UTC)

[image: null hypothesis.png]
STEN (talk) 13:33, 26 January 2015 (UTC)

Heh. 173.245.56.189 20:06, 26 January 2015 (UTC)

See http://xkcd.com/882/ for using a subgroup to improve your p value. Sebastian --108.162.231.68 23:02, 26 January 2015 (UTC)

I agree. The part about p >= 0.1 reminded me of that comic. S (talk) 01:25, 27 January 2015 (UTC)

This comic may be ridiculing the arbitrariness of the .05 significance cutoff and alluding to the "new statistics" being discussed in psychology.[1]
108.162.219.163 23:06, 26 January 2015 (UTC)

The "redo calculations" part could just mean "redo calculations with more significant figures" (i.e. to see whether this 0.050 value is actually 0.0498 or 0.0503). --141.101.104.52 13:36, 28 January 2015 (UTC)

Agreed. I first understood it as someone thinking that 0.05 is a "too round" value, and some calculations tend to raise suspicions when these values pop up. 188.114.99.189 21:28, 7 December 2015 (UTC)
TL;DR

As someone who understands p values, IMO this explanation is way too technical. I really think the intro paragraph should have a short, simplified version that doesn't require any specialized vocabulary words except "p-value" itself. Then talk about controls, null hypothesis, etc, in later paragraphs. - Frankie (talk) 16:52, 28 January 2015 (UTC)

That is nearly impossible. I'm using the American Statistical Association's definition that "Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value" until a better one comes. That said, the difficulty of explaining p-value is no excuse to use the wrong interpretation of "probability that observed result is due to chance".--Troy0 (talk) 06:27, 24 July 2016 (UTC)

There's an irony in the use of hair colour as a suspect subgroup analysis... hair colour can genuinely factor into studies. Ignoring the (probably wrong) common idea that redheads have a lower effectiveness rating for contraceptives, there do seem to be some suggestions that the recessive mutated gene has implications beyond hair colour. Getting sunburnt easily is one we all know, but how about painkiller and anaesthetic efficacy? For example: http://healthland.time.com/2010/12/10/why-surgeons-dread-red-heads/ --141.101.99.84 09:15, 18 June 2015 (UTC)