2059: Modified Bayes' Theorem
==Explanation==
{{incomplete|Please edit the explanation below and only mention here why it isn't complete. Do NOT delete this tag too soon.}}
{{w|Bayes' Theorem}} is an equation in statistics that gives the probability of a given hypothesis, accounting not only for a single experiment or observation but also for your existing knowledge about the hypothesis, i.e. its prior probability. Randall's modified form of the equation also purports to account for the probability that you are indeed applying Bayes' Theorem itself correctly, by including that as a term in the equation.
Bayes' theorem is:

<math>P(H \mid X) = \frac{P(X \mid H) \, P(H)}{P(X)}</math>,

where
*<math>P(H \mid X)</math> is the belief that <math>H</math> is true given that <math>X</math> is true; this is the posterior probability of <math>H</math>.
*<math>P(X \mid H)</math> is the belief that <math>X</math> is true given that <math>H</math> is true.
*<math>P(H)</math> and <math>P(X)</math> are the beliefs that <math>H</math> and <math>X</math>, respectively, are true independent of other evidence; these are the prior probabilities of <math>H</math> and <math>X</math>.
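The definitions above can be checked with a small numeric sketch. The scenario and all numbers below are made up purely for illustration (a rare condition and a fairly accurate test):

```python
# Hypothetical example: H = "has the condition", X = "test is positive".
p_h = 0.01          # prior P(H): 1% base rate (assumed)
p_x_given_h = 0.95  # likelihood P(X|H): test sensitivity (assumed)
p_x = 0.05          # marginal P(X): overall positive-test rate (assumed)

# Bayes' theorem: posterior P(H|X) = P(X|H) * P(H) / P(X)
p_h_given_x = p_x_given_h * p_h / p_x
print(p_h_given_x)  # approximately 0.19
```

Even a positive result from a sensitive test only raises the belief from 1% to about 19%, because the prior is so low; this is exactly the kind of update Bayes' theorem formalizes.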
Randall's modified theorem, from the comic, is:

<math>P(H \mid X) = P(H) \times \left(1 + P(C) \times \left(\frac{P(X \mid H)}{P(X)} - 1\right)\right)</math>,

where <math>P(C)</math> is the probability that you are using Bayes' theorem correctly.

If <math>P(C)=1</math>, the modified theorem reverts to the original Bayes' theorem (which makes sense, as a probability of one would mean certainty that you are using Bayes' theorem correctly).
− | |||
− | |||
If <math>P(C)=0</math>, the modified theorem becomes <math>P(H \mid X) = P(H)</math>, which says that the belief in your hypothesis is not affected by the result of the observation (which makes sense: if you are certain you are misapplying the theorem, the outcome of the calculation should not affect your belief).
This happens because, if you apply the original theorem, the modified theorem can be rewritten as <math>P(H \mid X) = P(H)(1-P(C)) + P(H \mid X)P(C)</math>, where the second <math>P(H \mid X)</math> is the posterior given by the original theorem. This is the [https://en.wikipedia.org/wiki/Linear_interpolation linearly interpolated] weighted average of the belief you had before the calculation and the belief you would have if you applied the theorem correctly. It moves smoothly from not believing your calculation at all (keeping the same belief as before) when <math>P(C)=0</math>, to changing your belief exactly as Bayes' theorem suggests when <math>P(C)=1</math>.
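The algebraic equivalence between the modified theorem and the weighted-average form can be verified numerically. The probabilities below are arbitrary assumed values chosen only to exercise the identity:

```python
# Arbitrary assumed probabilities for H, X|H, and X.
p_h, p_x_given_h, p_x = 0.3, 0.6, 0.5

# Ordinary Bayes posterior: P(X|H) * P(H) / P(X)
bayes_posterior = p_x_given_h * p_h / p_x

for p_c in (0.0, 0.25, 0.5, 1.0):
    # Modified theorem as written in the comic.
    modified = p_h * (1 + p_c * (p_x_given_h / p_x - 1))
    # Equivalent linear interpolation between prior and Bayes posterior.
    interpolated = p_h * (1 - p_c) + bayes_posterior * p_c
    assert abs(modified - interpolated) < 1e-12
```

At `p_c = 0` both forms return the prior `p_h`; at `p_c = 1` both return the ordinary Bayes posterior.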
<math>1-P(C)</math> is the probability that you are using the theorem incorrectly.
− | Modified | + | The title text suggests that an additional term should be added for the probability that the Modified Bayes Theorem is correct. But that's *this* equation, so it would make the formula self-referential. It could also result in an infinite regress -- we'd need another term for the probability that the version with the probability added is correct, and another term for that version, and so on. It's also unclear what the point of using an equation we're not sure of is (although sometimes we can: {{w|Newton's Laws}} are not as correct as the Einstein's {{w|Theory of Relativity}} but they're a reasonable approximation in most circumstances}. |
==Transcript==
{{incomplete transcript|Do NOT delete this tag too soon.}}
:Modified Bayes' theorem:
:P(H|X) = P(H) × (1 + P(C) × ( P(X|H)/P(X) - 1 ))
:H: Hypothesis
:X: Observation