Has anyone solved PTLOS exercise 4.1?



This is an exercise given in Probability Theory: The Logic of Science, by Edwin Jaynes (2003). There is a partial solution here. I have been working on a more general partial solution, and was wondering whether anyone else has solved it. I will wait a while before posting my answer, to give others a go.

Okay, so suppose we have $n$ mutually exclusive and exhaustive hypotheses, denoted by $H_i \; (i=1,\dots,n)$. Further suppose we have $m$ data sets, denoted by $D_j \; (j=1,\dots,m)$. The likelihood ratio for the $i$th hypothesis is given by:

$$LR(H_i)=\frac{P(D_1 D_2 \dots D_m \mid H_i)}{P(D_1 D_2 \dots D_m \mid \overline{H}_i)}$$

Note that these are conditional probabilities. Now suppose that, given the $i$th hypothesis $H_i$, the $m$ data sets are independent, so we have:

$$P(D_1 D_2 \dots D_m \mid H_i)=\prod_{j=1}^{m} P(D_j \mid H_i) \quad (i=1,\dots,n) \quad \text{(Condition 1)}$$

It would be quite convenient for us if the denominator also factored in this way:

$$P(D_1 D_2 \dots D_m \mid \overline{H}_i)=\prod_{j=1}^{m} P(D_j \mid \overline{H}_i) \quad (i=1,\dots,n) \quad \text{(Condition 2)}$$

For in this case the likelihood ratio will split into a product of smaller factors for each data set, so that we have:

$$LR(H_i)=\prod_{j=1}^{m}\frac{P(D_j \mid H_i)}{P(D_j \mid \overline{H}_i)}$$

So in this case, each data set will "vote for $H_i$" or "vote against $H_i$" independently of any other data set.

The exercise is to prove that if $n>2$ (more than two hypotheses), there is no non-trivial way in which this factoring can occur. That is, if you assume that Condition 1 and Condition 2 hold, then at most one of the factors

$$\frac{P(D_1 \mid H_i)}{P(D_1 \mid \overline{H}_i)},\; \frac{P(D_2 \mid H_i)}{P(D_2 \mid \overline{H}_i)},\; \dots,\; \frac{P(D_m \mid H_i)}{P(D_m \mid \overline{H}_i)}$$

is different from 1, and thus only one data set will contribute to the likelihood ratio.

I personally found this result quite fascinating, because it basically shows that multiple hypothesis testing is nothing but a series of binary hypothesis tests.
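The claim is easy to probe numerically. Here is a small sketch of my own (not part of the exercise; all numbers invented): with $n=3$ hypotheses and two data propositions that are independent given each $H_i$, Condition 1 holds by construction, but computing both sides of Condition 2 shows it fails.

```python
# n = 3 exhaustive, exclusive hypotheses; data independent given each H_k
# (Condition 1 by construction). Check whether Condition 2 also holds.
# All numbers below are made up for illustration.

n = 3
h = [0.2, 0.3, 0.5]        # priors P(H_k)
# d[j][k] = P(D_j | H_k); D_1, D_2 conditionally independent given each H_k
d = [[0.9, 0.4, 0.1],      # P(D_1 | H_k)
     [0.8, 0.3, 0.2]]      # P(D_2 | H_k)

i = 0  # test hypothesis H_1, so Hbar_1 = H_2 + H_3
p_not_i = sum(h[k] for k in range(n) if k != i)

# P(D_1 D_2 | Hbar_i) via the sum rule over the other hypotheses
joint = sum(h[k] * d[0][k] * d[1][k] for k in range(n) if k != i) / p_not_i

# product of the marginals P(D_1 | Hbar_i) * P(D_2 | Hbar_i)
marg = [sum(h[k] * d[j][k] for k in range(n) if k != i) / p_not_i
        for j in range(2)]
product = marg[0] * marg[1]

print(joint, product)               # the two sides of Condition 2 differ
assert abs(joint - product) > 1e-3  # Condition 2 fails for these numbers
```

Forcing Condition 2 to hold as well is exactly what pushes the likelihoods into the degenerate situation the exercise describes, where all but one factor of the likelihood ratio equals 1.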


I'm a little confused by the index on $\overline{H}_i$; is $\overline{H}_i=\arg\max_{h \neq H_i} P(D_1 \dots D_m \mid h)$? Or is it $\overline{H}_i=\arg\max_{h \in \{H_1,\dots,H_n\}} P(D_1 \dots D_m \mid h)$? Seems like it ought to be the latter, but then I'm not sure why the subscript. Or maybe I'm missing something else entirely :)
JMS

@JMS - $\overline{H}_i$ stands for the logical statement "$H_i$ is false", or that one of the other hypotheses is true. So in "Boolean algebra" we have $\overline{H}_i = H_1+H_2+\dots+H_{i-1}+H_{i+1}+\dots+H_n$ (because the hypotheses are exclusive and exhaustive).
probabilityislogic

I feel like there has to be a more intuitive solution than the algebra given in Sanders' partial solution. If the data are independent given each of the hypotheses, then this continues to hold when the priors of the hypotheses are varied. And somehow, the result is that the same must apply for the conclusion...
charles.y.zheng

@charles - I know exactly how you feel. I thought I could derive it using some qualitative inconsistency (reductio ad absurdum), but I couldn't do it. I could extend Sander's maths though. And it is Condition 2 which is "the dodgy one" in terms of what the result means.
probabilityislogic

@probabilityislogic "it basically shows that multiple hypothesis testing is nothing but a series of binary hypothesis tests." Please, could you expand on this sentence? By reading page 98 of Jaynes' book, I understand that you can reduce testing of $H_1,\dots,H_n$ to testing $H_1$ against each other hypothesis and then somehow normalize to get the posterior for $H_1$, but I do not understand why this would follow from the results of Exercise 4.1.
Martin Drozdik

Answers:



The reason we accepted eq. 4.28 (in the book; your Condition 1) was that we assumed the probability of the data given a certain hypothesis $H_a$ and background information $X$ is independent; in other words, for any $D_i$ and $D_j$ with $i \neq j$:

$$P(D_i \mid D_j H_a X)=P(D_i \mid H_a X) \tag{1}$$
Nonextensibility beyond the binary case can therefore be discussed like this: If we assume eq.1 to be true, is eq.2 also true?

$$P(D_i \mid D_j \overline{H}_a X) \stackrel{?}{=} P(D_i \mid \overline{H}_a X) \tag{2}$$
First let's look at the left side of eq. 2, using the multiplication rule:

$$P(D_i \mid D_j \overline{H}_a X)=\frac{P(D_i D_j \overline{H}_a \mid X)}{P(D_j \overline{H}_a \mid X)} \tag{3}$$
Since the $n$ hypotheses $\{H_1 \dots H_n\}$ are assumed mutually exclusive and exhaustive, we can write:

$$\overline{H}_a=\sum_{b \neq a} H_b$$
So eq. 3 becomes:

$$P(D_i \mid D_j \overline{H}_a X)=\frac{\sum_{b \neq a} P(D_i \mid D_j H_b X)\, P(D_j H_b \mid X)}{\sum_{b \neq a} P(D_j H_b \mid X)}=\frac{\sum_{b \neq a} P(D_i \mid H_b X)\, P(D_j H_b \mid X)}{\sum_{b \neq a} P(D_j H_b \mid X)}$$
For the case that we have only two hypotheses, the summations are removed (since there is only one $b \neq a$), the equal terms in the numerator and denominator, $P(D_j H_b \mid X)$, cancel out, and eq. 2 is proved correct, since $H_b=\overline{H}_a$. Therefore equation 4.29 can be derived from equation 4.28 in the book. But when we have more than two hypotheses, this doesn't happen. For example, if we have three hypotheses $\{H_1,H_2,H_3\}$, the equation above becomes:

$$P(D_i \mid D_j \overline{H}_1 X)=\frac{P(D_i \mid H_2 X)\, P(D_j H_2 \mid X)+P(D_i \mid H_3 X)\, P(D_j H_3 \mid X)}{P(D_j H_2 \mid X)+P(D_j H_3 \mid X)}$$
In other words:

$$P(D_i \mid D_j \overline{H}_1 X)=\frac{P(D_i \mid H_2 X)}{1+\dfrac{P(D_j H_3 \mid X)}{P(D_j H_2 \mid X)}}+\frac{P(D_i \mid H_3 X)}{1+\dfrac{P(D_j H_2 \mid X)}{P(D_j H_3 \mid X)}}$$
The only way this equation can yield eq.2 is that both denominators equal 1, i.e. both fractions in the denominators must equal zero. But that is impossible.
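A quick numerical check of this argument (my own sketch, with invented numbers, not part of the answer): the weighted average obtained from the expansion of eq. 3 differs from the right side of eq. 2 when there are three hypotheses.

```python
# Three exclusive, exhaustive hypotheses; D_i and D_j are conditionally
# independent given each H_b. Does eq. 2 hold anyway? (No.)
h   = [0.3, 0.3, 0.4]   # P(H_b | X)
pdi = [0.9, 0.5, 0.1]   # P(D_i | H_b X)
pdj = [0.7, 0.4, 0.2]   # P(D_j | H_b X)

a = 0                   # consider Hbar_a = H_2 + H_3
bs = [b for b in range(3) if b != a]

# left side of eq. 2, via the weighted average from the expansion of eq. 3
lhs = (sum(pdi[b] * pdj[b] * h[b] for b in bs)
       / sum(pdj[b] * h[b] for b in bs))

# right side of eq. 2: P(D_i | Hbar_a X)
rhs = sum(pdi[b] * h[b] for b in bs) / sum(h[b] for b in bs)

print(lhs, rhs)              # not equal: conditioning on D_j shifts
assert abs(lhs - rhs) > 1e-3 # the weight between H_2 and H_3
```

The discrepancy is exactly the point of the answer: observing $D_j$ re-weights the hypotheses inside $\overline{H}_a$, so $D_i$ is no longer independent of $D_j$ given $\overline{H}_a$ alone.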

I think the fourth equation is incorrect. We should have $P(D_i D_j H_b \mid X)=P(D_i H_b \mid X)\, P(D_j \mid H_b X)$
probabilityislogic

Thank you very much probabilityislogic, I was able to correct the solution. What do you think now?
astroboy

I just don't understand how Jaynes says: "Those who fail to distinguish between logical independence and causal independence would suppose that (4.29) is always valid".
astroboy

I think I found the answer to my last comment: right after the sentence above, Jaynes says: "provided only that no $D_i$ exerts a physical influence on any other $D_j$". So essentially Jaynes is saying that even if they don't have physical influence, there is a logical limitation that doesn't allow the generalization to more than two hypotheses.
astroboy

After reading the text again I feel my last comment was not a good answer. As I understand it now, Jaynes wanted to say: "Those who fail to distinguish between logical independence and causal independence" would argue that $D_i$ and $D_j$ are assumed to have no physical influence. Thus they have causal independence, which for them implies logical independence over any set of hypotheses. So they find all this discussion meaningless and simply proceed to generalize the binary case.
astroboy


Okay, so rather than go and re-derive Saunders' equation (5), I will just state it here. Conditions 1 and 2 imply the following equality:

$$\prod_{j=1}^{m}\left(\sum_{k \neq i} h_k d_{jk}\right)=\left(\sum_{k \neq i} h_k\right)^{m-1}\left(\sum_{k \neq i} h_k \prod_{j=1}^{m} d_{jk}\right)$$
where $d_{jk}=P(D_j \mid H_k, I)$ and $h_k=P(H_k \mid I)$.

Now we can specialise to the case $m=2$ (two data sets) by taking $D_1^{(1)} \equiv D_1$ and relabeling $D_2^{(1)} \equiv D_2 D_3 \dots D_m$. Note that these two data sets still satisfy Conditions 1 and 2, so the result above applies to them as well. Now expanding in the case $m=2$ we get:

$$\left(\sum_{k \neq i} h_k d_{1k}\right)\left(\sum_{l \neq i} h_l d_{2l}\right)=\left(\sum_{k \neq i} h_k\right)\left(\sum_{l \neq i} h_l d_{1l} d_{2l}\right)$$

$$\sum_{k \neq i}\sum_{l \neq i} h_k h_l d_{1k} d_{2l}=\sum_{k \neq i}\sum_{l \neq i} h_k h_l d_{1l} d_{2l}$$

$$\sum_{k \neq i}\sum_{l \neq i} h_k h_l d_{2l}(d_{1k}-d_{1l})=0 \quad (i=1,\dots,n)$$
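The algebra of the last three displayed equations can be sanity-checked numerically (my own sketch, arbitrary numbers): subtracting the two expanded products really does give the double sum with the factor $(d_{1k}-d_{1l})$.

```python
# Check: LHS - RHS of the expanded m = 2 identity equals the double sum
# sum_{k != i} sum_{l != i} h_k h_l d_2l (d_1k - d_1l). Numbers arbitrary.
h  = [0.2, 0.3, 0.5]   # h_k = P(H_k | I)
d1 = [0.9, 0.4, 0.1]   # d_1k = P(D_1 | H_k, I)
d2 = [0.8, 0.3, 0.2]   # d_2k = P(D_2 | H_k, I)

i = 0
ks = [k for k in range(3) if k != i]

lhs = sum(h[k] * d1[k] for k in ks) * sum(h[l] * d2[l] for l in ks)
rhs = sum(h[k] for k in ks) * sum(h[l] * d1[l] * d2[l] for l in ks)
double_sum = sum(h[k] * h[l] * d2[l] * (d1[k] - d1[l])
                 for k in ks for l in ks)

assert abs((lhs - rhs) - double_sum) < 1e-12  # the expansion step checks out
```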

The term $(d_{1a}-d_{1b})$ occurs twice in the above double summation, once when $k=a$ and $l=b$, and once again when $k=b$ and $l=a$. This will occur as long as $a,b \neq i$. The coefficients of the two occurrences are given by $d_{2b}$ and $d_{2a}$. Now because there are $n$ of these equations (one for each $i$), we can actually remove $i$ from them. To illustrate, take $i=1$: this equation is missing the conditions where $a=1,b=2$ and $b=1,a=2$. Now take $i=3$, and we now can have these two conditions (note this assumes at least three hypotheses). So the equation can be re-written as:

$$\sum_{l>k} h_k h_l (d_{2l}-d_{2k})(d_{1k}-d_{1l})=0$$

Now each of the $h_i$ terms must be greater than zero, for otherwise we are dealing with $n_1<n$ hypotheses, and the answer can be reformulated in terms of $n_1$. So these can be removed from the above set of conditions:

$$\sum_{l>k}(d_{2l}-d_{2k})(d_{1k}-d_{1l})=0$$

Thus, there are $\frac{n(n-1)}{2}$ conditions that must be satisfied, and each condition implies one of two "sub-conditions": that $d_{jk}=d_{jl}$ for either $j=1$ or $j=2$ (but not necessarily both). Now we have a set of all of the unique pairs $(k,l)$ for which $d_{jk}=d_{jl}$. If we were to take $n-1$ of these pairs for one of the $j$, then we would have all the numbers $1,\dots,n$ in the set, and $d_{j1}=d_{j2}=\dots=d_{j,n-1}=d_{j,n}$. This is because the first pair has 2 elements, and each additional pair brings at least one additional element to the set*

But note that because there are $\frac{n(n-1)}{2}$ conditions, we must choose at least the smallest integer greater than or equal to $\frac{1}{2}\times\frac{n(n-1)}{2}=\frac{n(n-1)}{4}$ of them for one of $j=1$ or $j=2$. If $n>4$ then the number of terms chosen is greater than $n-1$. If $n=4$ or $n=3$ then we must choose exactly $n-1$ terms. This implies that $d_{j1}=d_{j2}=\dots=d_{j,n-1}=d_{j,n}$. Only with two hypotheses ($n=2$) does this not occur. But from the last equation in Saunders' article this equality condition implies:

$$P(D_j \mid \overline{H}_i)=\frac{\sum_{k \neq i} d_{jk} h_k}{\sum_{k \neq i} h_k}=\frac{d_{ji}\sum_{k \neq i} h_k}{\sum_{k \neq i} h_k}=d_{ji}=P(D_j \mid H_i)$$

Thus, in the likelihood ratio we have:

$$\frac{P(D_1^{(1)} \mid H_i)}{P(D_1^{(1)} \mid \overline{H}_i)}=\frac{P(D_1 \mid H_i)}{P(D_1 \mid \overline{H}_i)}=1 \quad \text{OR} \quad \frac{P(D_2^{(1)} \mid H_i)}{P(D_2^{(1)} \mid \overline{H}_i)}=\frac{P(D_2 D_3 \dots D_m \mid H_i)}{P(D_2 D_3 \dots D_m \mid \overline{H}_i)}=1$$

To complete the proof, note that if the second condition holds, the result is already proved, and only one ratio can be different from 1. If the first condition holds, then we can repeat the above analysis by relabeling $D_1^{(2)} \equiv D_2$ and $D_2^{(2)} \equiv D_3 \dots D_m$. Then we would have $D_1, D_2$ not contributing, or $D_2$ being the only contributor. We would then have a third relabeling when $D_1 D_2$ not contributing holds, and so on. Thus, only one data set can contribute to the likelihood ratio when Condition 1 and Condition 2 hold, and there are more than two hypotheses.

*NOTE: An additional pair might bring no new terms, but this would be offset by a pair which brought 2 new terms. E.g. take $d_{j1}=d_{j2}$ as first [+2], then $d_{j1}=d_{j3}$ [+1] and $d_{j2}=d_{j3}$ [+0], but the next term must have $d_{jk}=d_{jl}$ for both $k,l \notin (1,2,3)$. This will add two terms [+2]. If $n=4$ then we don't need to choose any more, but for the "other" $j$ we must choose the 3 pairs which are not $(1,2),(2,3),(1,3)$. These are $(1,4),(2,4),(3,4)$, and thus the equality holds, because all numbers $(1,2,3,4)$ are in the set.
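The degenerate case this proof arrives at can be checked numerically (my own sketch, not part of the original answer, with invented numbers): if $d_{2k}$ is the same for every $k$, then Condition 2 holds exactly and $D_2$ contributes a factor of exactly 1 to every likelihood ratio.

```python
# n = 3 hypotheses; d_2k forced equal across k (the degenerate case).
# Then the joint under Hbar_i factors (Condition 2) and D_2's factor
# in LR(H_i) is 1 for every i, so only D_1 contributes.
n = 3
h  = [0.2, 0.3, 0.5]      # h_k = P(H_k | I)
d1 = [0.9, 0.4, 0.1]      # d_1k, free to vary over k
d2 = [0.35, 0.35, 0.35]   # d_2k equal for all k

for i in range(n):
    p_not = sum(h[k] for k in range(n) if k != i)
    # Condition 2 check: joint vs product of marginals under Hbar_i
    joint = sum(h[k] * d1[k] * d2[k] for k in range(n) if k != i) / p_not
    m1 = sum(h[k] * d1[k] for k in range(n) if k != i) / p_not
    m2 = sum(h[k] * d2[k] for k in range(n) if k != i) / p_not
    assert abs(joint - m1 * m2) < 1e-12   # Condition 2 holds exactly
    assert abs(d2[i] / m2 - 1.0) < 1e-12  # D_2's factor in LR(H_i) is 1
print("only D_1 contributes to each likelihood ratio")
```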


I am beginning to doubt the accuracy of this proof. The result in Saunders' maths implies only $n$ non-linear constraints on the $d_{jk}$. This makes the $d_{jk}$ have only $n$ degrees of freedom instead of $2n$. However, to get to the $\frac{n(n-1)}{2}$ conditions a different argument is required.
probabilityislogic


For the record, here is a somewhat more extensive proof. It also contains some background information. Maybe this is helpful for others studying the topic.

The main idea of the proof is to show that Jaynes' Conditions 1 and 2 imply that

$$P(D_{m_k} \mid H_i X)=P(D_{m_k} \mid X)$$

for all but one data set $m_k \in \{1,\dots,m\}$. It then shows that for all these data sets, we also have

$$P(D_{m_k} \mid \overline{H}_i X)=P(D_{m_k} \mid X).$$

Thus we have, for all but one data set,

$$\frac{P(D_{m_k} \mid H_i X)}{P(D_{m_k} \mid \overline{H}_i X)}=\frac{P(D_{m_k} \mid X)}{P(D_{m_k} \mid X)}=1.$$
The reason that I wanted to include the proof here is that some of the steps involved are not at all obvious, and one needs to take care not to use anything else than conditions 1 and 2 and the product rule (as many of the other proofs implicitly do). The link above includes all these steps in detail. It is on my Google Drive and I will make sure it stays accessible.


Welcome to Cross Validated. Thank you for your answer. Can you please edit your answer to expand it, in order to include the main points of the link you provide? It will be more helpful both for people searching on this site and in case the link breaks. By the way, take the opportunity to take the Tour, if you haven't done it already. See also some tips on How to Answer, on formatting help, and on writing down equations using LaTeX / MathJax.
Ertxiem - reinstate Monica

Thanks for your comment. I edited the post and sketched the main steps of the proof.
dennis
Licensed under cc by-sa 3.0 with attribution required.