# The Frechet Probability Bounds — Super Wonky!

I recently ran across a problem using the Frechet probability bounds. The bounds weren’t immediately obvious to me and Google didn’t enlighten me, so I wrote an intuitive explanation. Super wonkiness to follow. You have been warned.

Suppose that we have two events A and B which occur with probability P(A) and P(B). We are interested in the joint event A∩B, the case where both events occur, but we don’t know anything about whether the events are correlated or independent. Can we nevertheless say something about the joint probability P(A∩B)? We can. The Frechet bounds or inequalities tell us:

max[0,P(A)+P(B)-1] ≤ P(A∩B) ≤ min[P(A),P(B)]

In words, the probability of the joint event can’t be smaller than max[0,P(A)+P(B)-1] or bigger than min[P(A),P(B)]. Let’s give an intuitive explanation.

The events themselves are not important, only the probabilities, so let’s use an intuitive model for the events. Let x be a random number distributed between 0 and 1 with each number equally likely (i.e. a uniform distribution). Suppose that the event A occurs whenever x is in some region between a and b. For example, we might say that event A occurs if x is between .4 and .6 and event B occurs if x is between .1 and .7. Notice also that since any number between 0 and 1 is equally likely, the probability that event A occurs is just the width of the A region, b-a. In this case, for example, P(A)=.6-.4=.2 and similarly P(B)=.7-.1=.6.
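The interval model is easy to check in code. Here is a minimal sketch (the `prob` helper and the tuple representation are my own, not from the post): each event is a subinterval of [0, 1], and its probability under a uniform draw is just the interval’s width.

```python
# Model an event as a subinterval (lo, hi) of [0, 1]. Under a uniform
# draw x ~ U[0, 1], the probability of the event is the interval's width.
def prob(event):
    lo, hi = event
    return hi - lo

A = (0.4, 0.6)  # event A: x in [.4, .6]
B = (0.1, 0.7)  # event B: x in [.1, .7]

print(prob(A))  # ~0.2 (up to floating-point rounding)
print(prob(B))  # ~0.6
```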

Now suppose that a demon arranges the events A and B to minimize the probability that both events occur. What is the smallest the demon could make P(A∩B)? To minimize the probability that both A and B occur, we can think of the demon as placing the two events “as far apart” as possible. For example, the demon will begin one event at 0 and define it as the region moving right until the width is equal to P(A), and the other event the demon will begin at 1 and move left until the width is equal to P(B). If P(A)=.2 and P(B)=.4, for example, then the demon will define the events so that A occurs when x∈[0,.2] and B occurs when x∈[.6,1]. Here’s a picture. Notice that by beginning one event at 0 and the other at 1 the demon minimizes the overlap, which is the probability that both events occur.

In this case, the events don’t overlap at all and so the demon has ensured that P(A∩B)=0.

But what if the two events must overlap? Suppose, for example, that P(A)=.5 and P(B)=.7; then the demon’s logic of minimization leads to the following picture:

We can see from the picture that the overlap is the region between .3 and .5, which we know has probability .5-.3=.2. Thus we have discovered that if P(A)=.5 and P(B)=.7 then the joint probability has to be at least .2, i.e. .2≤P(A∩B).

Now let’s generalize. The probability of any event occurring is the length of its region, b-a, but since the demon always begins the event A at 0, P(A) is simply b, the rightmost point of the A region. Thus, on the diagram the rightmost point of the A region is labelled P(A) (.5 in this case). Also, since the demon always begins the B region at 1, P(B)=1-a′ where a′ is the leftmost point of the B region, so rearranging we have that the leftmost point a′=1-P(B), as shown in the picture at .3. Thus, we can read immediately from the figure that the overlap region has length P(A)-(1-P(B)). Notice that if P(A)<1-P(B) then there is no overlap and the difference is a negative number. Rearranging slightly, the length of the overlap region is P(A)+P(B)-1. So now we have two cases: either there is no overlap at all, in which case the demon can make P(A∩B)=0, or there is overlap and the smallest the demon can make P(A∩B) is P(A)+P(B)-1. So putting it all together we have proven that:

max[0,P(A)+P(B)-1] ≤ P(A∩B)
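The demon’s minimizing arrangement translates directly into a few lines of code. A sketch (the function name `min_joint` is mine): place A=[0, P(A)] and B=[1-P(B), 1] and measure the overlap.

```python
# Demon's minimizing arrangement: A = [0, pA] starts at 0 and B = [1 - pB, 1]
# ends at 1. The overlap has length pA - (1 - pB) = pA + pB - 1 when positive,
# and is empty otherwise.
def min_joint(pA, pB):
    return max(0.0, pA + pB - 1.0)

print(min_joint(0.2, 0.4))  # 0.0 -- the intervals [0,.2] and [.6,1] don't touch
print(min_joint(0.5, 0.7))  # ~0.2 -- the overlap [.3,.5] from the picture
```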

The Frechet bounds also say:

P(A∩B) ≤ min[P(A),P(B)]

This is much easier to show. If the minimum was found by minimizing the overlap, then the maximum is found by maximizing the overlap. To maximize the joint probability the demon starts both regions at the same point, say at the 0 point, so the picture looks like this:

The joint probability is then the region of overlap. But the regions can’t overlap by more than the smaller of the two regions! In this case the A region is smaller than the B region, and since both regions start at 0 we have P(A∩B)≤P(A). Generalizing we have that:

P(A∩B) ≤ min[P(A),P(B)]
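The maximizing arrangement is the same idea with both intervals nested at 0 (again, the name `max_joint` is my label, not the post’s):

```python
# Demon's maximizing arrangement: A = [0, pA] and B = [0, pB] both start at 0,
# so the smaller interval lies entirely inside the larger one and the
# overlap is the smaller width.
def max_joint(pA, pB):
    return min(pA, pB)

print(max_joint(0.2, 0.6))  # 0.2 -- A is nested in B, so the overlap is P(A)
```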

Thus we have proven the Frechet bounds:

max[0,P(A)+P(B)-1] ≤ P(A∩B) ≤ min[P(A),P(B)]
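As a sanity check on the bounds as a whole, one can drop the demon and place the two intervals at random: every arrangement should land between the two Frechet limits. A quick Monte Carlo sketch (the `overlap` helper is mine):

```python
import random

def overlap(A, B):
    """Length of the intersection of two subintervals of [0, 1]."""
    lo, hi = max(A[0], B[0]), min(A[1], B[1])
    return max(0.0, hi - lo)

random.seed(0)
for _ in range(10_000):
    wA, wB = random.random(), random.random()  # widths play the role of P(A), P(B)
    a = random.uniform(0.0, 1.0 - wA)          # random left endpoint for A
    b = random.uniform(0.0, 1.0 - wB)          # random left endpoint for B
    pAB = overlap((a, a + wA), (b, b + wB))
    # Frechet bounds, with a little slack for floating point:
    assert max(0.0, wA + wB - 1.0) - 1e-9 <= pAB <= min(wA, wB) + 1e-9
print("all 10,000 random arrangements respect the bounds")
```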

Addendum: Here is Heckman on the theory of Frechet bounds, and Heckman and Smith, and Heckman, Smith and Clements with applications.

Is this so special? The lower bound is immediate if you observe that p(A and B) = p(A) + p(B) - p(A union B), and p(A union B) ≤ 1.

The upper bound is even easier: write p(A, B) = p(A) p(B|A) = p(B) p(A|B) and note that both conditional probabilities are bounded by 1.

Better switch over to salon and the left-wing bastions of statistics where P(cause of death = homicide | woman died at work) > P(cause of death = homicide | man died at work) implies women more likely to be killed in a homicide at work than men.

From the eyes of a mathematician, this is adorable.

+1
Nation's best libertarians!

From the eyes of an economist, a mathematician’s explanations of the world or human behaviour also look very adorable....

The upper bound is even easier to prove: If E_1 is a subset of E_2, then P(E_1) ≤ P(E_2).

The probability of A and B both occurring is less than the probability of A occurring. Then reverse the roles of A and B. Get: P(A and B) ≥ P(A) + P(B) - 1. Any probability is tautologically at least zero, as well. QED.

LOL Wordpress ate my proof! No wonder I can't follow half the arguments here!

I really liked this explanation. My stochastic class used to give me a headache but this method is visual and refreshing.

It seems to me that your "proof" applies only to the special case where the two distributions are uniform. The proof in the wikipedia article you link to is shorter, simpler, and more general.

There is an important difference between proof and explanation.

Yes and because of that I would not have written: "So putting it all together we have proven that:".

No, the proof doesn't rely on the underlying events being uniform; that is just a convenient transformation.

Prove it.

Let me rephrase that. With some effort, it may indeed be possible to generalize the argument that you have given for two uniform distributions in one dimension to arbitrary distributions in an arbitrary number of dimensions. But you haven't done that and it is far from obvious, based just on what you have shown, how to do it (admittedly I haven't given it any thought, but if you have to think about it, it is not obvious). If you generalized to arbitrary distributions in arbitrary dimensions however, I suspect things would no longer be simple. You wouldn't for example be able to write something like P(B)=1-a′ which depends on the distribution being uniform and one dimensional. By contrast, the wikipedia proof is completely general, is very simple, and relies on a couple of simple well known facts about probability distributions. I suspect you could do the whole proof with a couple of Venn diagrams.

The diagrams aren't of PDFs, they are just showing the probability of an event graphically. So you don't have a 1D PDF. Really the only argument he is making is that if P(A) + P(B) > 1, there must be some overlap, and that overlap is at least as big as P(A) + P(B) - 1.

Exactly.

I didn't say anything about the diagrams being pdfs. My statement that Alex's proof is not general was directed at statements like this: "The probability of any event occuring is the length of its region b-a but since the demon always begins the event A at 0 then the probability of P(A) is simply b the rightmost point of the A region". Here and in other places in his proof he is explicitly assuming a 1-d uniform distribution. But this topic scarcely justifies more words being spent on it.

I actually came here to say that all you need for a proof is to draw the two diagrams (maximum and minimum overlap), but apparently there are people for whom all those words still aren't enough.

Is this exposition of undergrad probability a Straussian joke at the expense of Krugman's "wonky" posts?

To Tabarrok, this is deep math. To the rest of us, it is a trivial result we learned in undergraduate probability theory.

Why so rude? It's an intuitive proof designed for undergrad probability theory.

To the rest of us,

You seem to have a very odd idea of what the great majority of people here took in college.

And you are reading his blog and not the other way around.

Hmm.

You guys post a fair amount of complicated stuff, so I girded myself up for super wonky! And now I can't help but feel that my chain has been yanked.

Wonky doesn't mean difficult, it means of interest only to wonks!

Thanks, I enjoyed this entry. Thanks to commenters who gave alternative explanations of Frechet as well.

I was hoping for at least a Chebyshev or Markov inequality.

