# 2001: Clickbait-Corrected p-Value

Title text: When comparing hypotheses with Bayesian methods, the similar 'clickbayes factor' can account for some harder-to-quantify priors.

## Explanation

Clickbait is the practice of using deceptive or hyperbolic headlines to entice readers to click on a dubious or sensationalist news story, often with the purpose of generating site traffic and ad revenue. Randall uses the scientific controversy over the health effects of chocolate on humans as an example, as there is widespread misinformation about chocolate's health effects online. In fact, no reliable studies confirm any such health effects, and no medical authority has approved any health claims regarding chocolate.

Hypothesis testing in statistics is a standard method to determine whether a particular hypothesis is supported by the data. For the topic given in this comic, a researcher might compare data on athletic performance with data on chocolate consumption by those athletes to determine whether the two trend together. By convention, the "null hypothesis" (denoted H_{0}) is that there is no correlation (e.g. chocolate doesn't affect athletic performance) while the "alternate hypothesis" (H_{1}) would be that they are correlated. (If the study consists of *feeding* chocolate to one of two identical groups and not the other, rather than tracking what they'd be eating anyway, then the alternative hypothesis can be strengthened to be that chocolate causes improved performance.) These sets are subjected to statistical tests which return a "test statistic". From that test statistic a "p-value" is calculated. The p-value indicates the probability of observing the obtained results (or any more extreme value), when the null hypothesis is true (e.g. chocolate has no effect on athletic performance).
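As an illustration, a permutation test is one simple way to obtain such a p-value: repeatedly shuffle the group labels and count how often a difference in means at least as large as the observed one arises by chance alone. This is a sketch only; the data below (sprint scores for a hypothetical chocolate-fed group vs. a control group) are invented for illustration:

```python
import random

def permutation_p_value(treated, control, n_perm=10_000, seed=0):
    """One-sided p-value: the probability, under H0 (no effect), of
    seeing a difference in group means at least as large as observed."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = treated + control
    n = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling: simulates H0
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / len(pooled[n:])
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Invented scores: chocolate-fed athletes vs. control athletes
chocolate = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3]
control = [6.7, 6.9, 6.6, 7.0, 6.5, 6.8]
p = permutation_p_value(chocolate, control)
```

If `p` falls below the conventional 0.05 threshold, the null hypothesis would be rejected in favor of the alternative.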

In other words, the p-value is an indicator of the statistical significance, and hence the reliability, of results affirming the "alternate hypothesis" (not the probability that the null hypothesis is correct). It answers the question: if there is no correlation, how likely was it that I saw a correlation at least this big? Hence, if the p-value is low enough (by convention < 0.05), the null hypothesis is rejected, and we conclude that the alternate hypothesis is supported by the data (NOT that it is "correct" or "true").

In this comic, the p-value is corrected by a factor that takes clickbait into account. This factor has the effect of increasing the p-value if H_{1} is more clickbaity than H_{0}, and decreasing it if H_{0} is more clickbaity than H_{1}. This suggests that whatever clickers of clickbait believe, the reverse is likely to be true.
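The correction itself is easy to compute from the formula in the comic (P_{CL} = P_{traditional} ∙ click(H_{1})/click(H_{0})). A minimal sketch, with invented click fractions:

```python
def clickbait_corrected_p(p_traditional, click_h1, click_h0):
    """P_CL = P_traditional * click(H1) / click(H0), per the comic.
    click(H) is the fraction of test subjects who click on a headline
    announcing that H is true."""
    if click_h0 == 0:
        raise ValueError("click(H0) must be nonzero")
    return p_traditional * click_h1 / click_h0

# Hypothetical numbers: the sensational H1 headline draws 3x the clicks
p_cl = clickbait_corrected_p(0.04, click_h1=0.30, click_h0=0.10)
# A borderline-significant 0.04 inflates to 0.12, no longer below 0.05
```

With these made-up numbers, a clickbaity alternative hypothesis triples the p-value, pushing a borderline result out of significance, exactly the effect described above.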

Furthermore, this factor may be interpreted as normalisation for an inherent selection bias: p-values for more clickbaity H_{1}s tend to be lower than they should be, and p-values for non-clickbaity H_{0}s higher than they should be. For example, for p-values on the cusp of significance, researchers may be more incentivized to fudge and adjust the data to get the p-value down if the H_{1} is highly sensational, since a sensational H_{1} makes the research more likely to get published and attract attention. (See also FiveThirtyEight's article on p-hacking and this Stack Exchange question about p-hacking in the wild.) P-hacking has previously been associated with both chocolate and media sensationalism.

As the statistical results now depend on people's beliefs about the hypothesis, this could appear to be as far from actual science as one can get. However, in a way, it is more in tune with a quote by John Arbuthnot (one of the originators of the use of p-values) attributing variation to active thought rather than chance: "from whence it follows, that it is Art, not Chance, that governs." Randall applying that quote to the thoughts of the masses brings it in line with "Art".

If this correction could somehow be enforced on the scientific world, it would have the effect of keeping the popular view of scientific results more in line with reality. Often a single study will show an exciting result and be sensationalised by the media before further studies can verify it, in part due to the conflicting interests of the scientific community and the media. The clickbait correction may aid a reader in exercising caution when interpreting sensationalist scientific discoveries in news media. Additionally, in some areas of science more mundane results never undergo third-party replication studies (see replication crisis), or are perhaps never studied in the first place. The clickbait correction factor has the opposite effect on these more mundane topics, making it easier to demonstrate effects within them with a lower statistical barrier to entry, perhaps in the hope that more will get studied, published, and exposed to the public.

Technically, the comic's depiction of null and alternative hypotheses is not entirely correct. Since the alternative hypothesis (H_{1}) predicts that chocolate will *improve performance* (i.e., a one-tailed, directional hypothesis), the null hypothesis (H_{0}) should predict that chocolate will do nothing *or* make performance worse. In other words, the alternative hypothesis should be true if and only if the null hypothesis is false. Alternatively, if H_{1} were to say that *chocolate will change performance* (for better or worse; i.e., a two-tailed hypothesis), then H_{0} should say that *chocolate will do nothing*.
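The one-tailed vs. two-tailed distinction can be made concrete with a small sketch. Assuming (purely for illustration) a standard-normal test statistic z, the two p-values relate as follows:

```python
from math import erfc, sqrt

def p_one_tailed(z):
    # P(Z >= z): "chocolate boosts performance" (directional H1)
    return 0.5 * erfc(z / sqrt(2))

def p_two_tailed(z):
    # P(|Z| >= |z|): "chocolate changes performance" (either direction)
    return erfc(abs(z) / sqrt(2))

z = 1.8  # hypothetical test statistic
one = p_one_tailed(z)   # about 0.036: significant at 0.05
two = p_two_tailed(z)   # about 0.072: not significant at 0.05
```

The two-tailed p-value is exactly twice the one-tailed one for a symmetric statistic, which is why the same data can clear the 0.05 bar under a directional hypothesis but miss it under a non-directional one.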

The title text refers to Bayesian statistics, a statistical technique which involves considering (before you see the new data) how likely you think it is that the hypothesis is true. (It is worth noting that the traditional statistical analysis described above doesn't directly say anything about how likely the hypothesis is to be *true*; it simply assesses whether the data are consistent with the null hypothesis.) Under Bayesian analysis, you begin with a prior probability, or simply "prior", which expresses how likely you think the alternate hypothesis is. After seeing the new data, you apply Bayes' theorem to *update* your belief about the hypothesis, and as a result you should then consider the hypothesis to be more likely (or less likely) than you considered it before.
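Such an update can be sketched for two competing hypotheses; the priors and likelihoods below are invented for illustration:

```python
def bayes_update(prior_h1, likelihood_h1, likelihood_h0):
    """Posterior P(H1 | data) via Bayes' theorem for two hypotheses:
    P(H1|D) = P(H1) P(D|H1) / (P(H1) P(D|H1) + P(H0) P(D|H0))."""
    numerator = prior_h1 * likelihood_h1
    denominator = numerator + (1 - prior_h1) * likelihood_h0
    return numerator / denominator

# The same 5:1 likelihood ratio moves different priors very differently:
modest = bayes_update(prior_h1=0.50, likelihood_h1=0.5, likelihood_h0=0.1)
extraordinary = bayes_update(prior_h1=0.01, likelihood_h1=0.5, likelihood_h0=0.1)
# modest ends up around 0.83; extraordinary stays below 0.05
```

The modest claim becomes quite credible, while the extraordinary one remains improbable despite identical evidence, which is the behavior the next paragraph describes.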

Bayesian statistics therefore recognizes that an extraordinary claim should require more evidence to convince you than a "reasonable" claim would (which is, arguably, much the same point the clickbait correction makes), but also that *enough* evidence, perhaps gathered step by step over time, should be sufficient to convince you even of extraordinary claims.

The technique can be hard to apply in science, however, because of the difficulty of agreeing upon reasonable priors. Here it's suggested that an alternative "clickbayes factor" (a pun and portmanteau of clickbait and Bayesian) could be used to approximate hard-to-quantify priors.

## Transcript

- [Under a heading that says "Clickbait-Corrected p-Value" there is a mathematical formula. Below that is a description of the variables used and what they mean:]
- Clickbait-corrected p-value:
- P_{CL} = P_{traditional} ∙ click(H_{1}) / click(H_{0})
- H_{0}: Null hypothesis ("Chocolate has no effect on athletic performance")
- H_{1}: Alternative hypothesis ("Chocolate boosts athletic performance")
- click(H): Fraction of test subjects who click on a headline announcing that H is true


# Discussion

I thought this comic was about *correcting* for any p-hacking that aimed to increase the media presence (and thus the clickbait) of the study. 172.68.94.10 17:32, 1 June 2018 (UTC)

The explanation for null hypothesis is correct semantically, it would be accepted if there was no OR negative improvement, however, this is usually stated more succinctly as "will not improve performance" or (in keeping with the language of the comic) "does not boost performance", since that has the same meaning without the unnecessary verbosity. ---- 162.158.186.42 (talk) *(please sign your comments with ~~~~)*

I can't believe I clicked on this 172.68.86.46 20:28, 1 June 2018 (UTC)

I've removed a paragraph which claimed that this was an instance of Bayes theorem. Despite some similarity in structure, it is not. Winstonewert (talk) 01:39, 2 June 2018 (UTC)

I was honestly expecting a comic about (or at least referencing) 2001: A Space Odyssey. Herobrine (talk) 07:41, 2 June 2018 (UTC)

If researchers were to use this adjusted formula, it would make sensational results much harder to demonstrate as significant, and uninteresting results much easier. Seems to me it’s a good adjustment for a lot of things. I wonder about p-values, though ... seems to me a value that is at all borderline just means you don’t have enough data yet for the actual size of the effect you’re measuring, but I don’t know much about statistics. 172.68.54.130 02:08, 3 June 2018 (UTC)

Ummm. I use a Gecko engine* with "Block Advertisement" checked. *(K-Meleon 76.0) I can see the image from "xkcd Phone 2000" and "LeBron James and Stephen Curry", but NOT THIS PAGE. Unless I uncheck "Block Advertisement". Obviously this is to encourage clicking on things? 172.68.2.70 09:29, 4 June 2018 (UTC)

This could be an attempt to correct for the effects described in the infamous Ioannidis paper:

In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller[...] where there is greater flexibility in designs, [...] where there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.

--162.158.90.192 23:04, 19 June 2018 (UTC)

**Incomplete?**

This comic is labeled as incomplete, but the explanation seems pretty thorough as it is. Any explanation can be cleaned up ad infinitum to suit people's liking, but this one seems pretty good as it is. Is the incomplete tag still warranted at this point?--Sensorfire (talk) 18:46, 1 October 2018 (UTC)

- There were many edits recently because this comic is mentioned at the sitenotice on top here, if you now understand what a p-Value is, feel free to remove that incomplete tag. I personally prefer a more straight forward and shorter explanation. But that's only my opinion. When this comic is not labeled incomplete anymore I will put some else to that sitenotice. --Dgbrt (talk) 21:23, 1 October 2018 (UTC)
- If this wiki tracked pageviews, somebody could put forth a hypothesis of something measurable on the site, see how many clicks each hypothesis got, and produce a real clickbait-adjusted p-value for it. 162.158.79.107 02:52, 5 October 2018 (UTC)

Still incomplete because if you google for this "chocolate health" you will understand. --Dgbrt (talk) 19:20, 5 October 2018 (UTC)

true -> so; will -> shall; if and only if -> if; hard -> touh Lysdexia (talk) 07:59, 25 July 2019 (UTC)