Why does "explaining away" make intuitive sense?



I recently learned about a principle of probabilistic reasoning called "explaining away", and I am trying to grasp an intuition for it.

Let me set up a scenario. Let A be the event that an earthquake occurs. Let B be the event that the Jolly Green Giant strolls through town. Let C be the event that the ground shakes. Let A ⊥ B. As you can see, either A or B can cause C.

Using "explaining away" reasoning, if C occurs, then one of P(A) or P(B) increases but the other decreases, since I don't need alternative reasons to explain why C occurred. However, my current intuition tells me that both P(A) and P(B) should increase if C occurs, since C occurring makes it more likely that any of the causes of C occurred.

How do I reconcile my current intuition with the idea of explaining away? How do I use explaining away to justify that A and B are conditionally dependent on C?


What does A ⊥ B mean?
mark999

It means that A is independent of B, sorry.
David Faux

I would choose a scenario that is less likely to confuse you. "Shaking ground" could be the cause or the effect of "earthquake", and both are probably caused by the "green giant strolling". For explaining away to happen, both A and B must be causes of C.
Neil G

@DavidFaux you don't need to apologize. That is standard mathematical notation for stating the independence of variables. Btw, kudos on the good question and also +1 because the link you provide is really really good! I had been confused about all those concepts and that article you provided is really good. Thanks! :)
Charlie Parker

Answers:



Clarification and notation

if C occurs, one of P(A) or P(B) increases, but the other decreases

This isn't correct. You have (implicitly and reasonably) assumed that A is (marginally) independent of B and also that A and B are the only causes of C. This implies that A and B are indeed dependent conditional on C, their joint effect. These facts are consistent because explaining away is about P(A | C), which is not the same distribution as P(A). The conditioning bar notation is important here.

However, my current intuition tells me that both P(A) and P(B) should increase if C occurs since C occurring makes it more likely that any of the causes for C occurred.

Your intuition here is the 'inference from semi-controlled demolition' (see below for details). To begin with, you already believe that C indicates that either A or B happened, so you can't get any more certain that either A or B happened when you see C. But how about both A and B given C? Well, that is possible, but less likely than either 'A and not B' or 'B and not A'. That is 'explaining away', and it is what you want the intuition for.
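To make that comparison concrete, here is a minimal numeric sketch (mine, not part of the original answer): it assumes hypothetical priors of 0.1 for both A and B and, for simplicity, treats C as simply "A or B". Given C, either cause alone is far more probable than both together, even though P(A|C) and P(B|C) both exceed their priors.

```python
# Minimal sketch with hypothetical numbers: A and B independent, each with
# prior 0.1, and C modelled (for simplicity) as the event "A or B".
p_a, p_b = 0.1, 0.1

p_c = 1 - (1 - p_a) * (1 - p_b)            # P(A or B) = 0.19

p_a_given_c = p_a / p_c                    # A implies C here, so P(A|C) = P(A)/P(C)
p_a_and_b_given_c = p_a * p_b / p_c        # both causes at once
p_a_not_b_given_c = p_a * (1 - p_b) / p_c  # A alone

print("P(A|C)          =", round(p_a_given_c, 3))        # ~0.526, up from 0.1
print("P(A and B | C)  =", round(p_a_and_b_given_c, 3))  # ~0.053
print("P(A, not B | C) =", round(p_a_not_b_given_c, 3))  # ~0.474
```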

Intuition

Let's move to a continuous model so we can visualise things more easily and think about correlation as a particular form of non-independence. Assume that reading scores (A) and math scores (B) are independently distributed in the general population. Now assume that a school will admit (C) a student with a combined reading and math score over some threshold. (It doesn't matter what that threshold is as long as it's at least a bit selective).

Here's a concrete example: assume independent, unit-normally distributed reading and math scores, and a sample of students summarised in the figure below. When a student's reading and math scores together are over the admission threshold (here 1.5), the student is shown as a red dot.

[Figure: explaining away as a collider relationship. Scatter plot of reading vs. math scores; students whose combined score exceeds the admission threshold of 1.5 are shown as red dots.]

Because good math scores offset bad reading scores and vice versa, the population of admitted students will be such that reading and math are now dependent and negatively correlated (-0.65 here). This is also true in the non-admitted population (-0.19 here).

So, when you meet a randomly chosen admitted student and you hear about her high math score, you should expect her to have gotten a lower reading score: the math score 'explains away' her admission. Of course she could also have a high reading score (this certainly happens in the plot) but it's less likely. And none of this affects our earlier assumption of no correlation, negative or positive, between math and reading scores in the general population.
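If you want to reproduce something like this, here is a rough simulation sketch (mine, not the original answer's): independent unit-normal scores and an assumed admission threshold of 1.5 on the sum. The exact correlations will differ from the numbers quoted above, but the pattern is the same: roughly zero overall, clearly negative among the admitted.

```python
# Simulation sketch: independent unit-normal reading and math scores,
# admission when the combined score exceeds 1.5 (assumed threshold).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
reading = rng.standard_normal(n)
math_score = rng.standard_normal(n)
admitted = (reading + math_score) > 1.5

def corr(x, y):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

print("corr, everyone    :", round(corr(reading, math_score), 2))                        # close to 0
print("corr, admitted    :", round(corr(reading[admitted], math_score[admitted]), 2))    # clearly negative
print("corr, not admitted:", round(corr(reading[~admitted], math_score[~admitted]), 2))  # mildly negative
```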

Intuition check

Moving back to a discrete example closer to your original. Consider the best (and perhaps only) cartoon about 'explaining away'.

[Cartoon: 'semi-controlled demolition']

The government plot is A, the terrorist plot is B, and treat the general destruction as C, ignoring the fact there are two towers. If it is clear why the audience are being quite rational when they doubt the speaker's theory, then you understand 'explaining away'.


I think the examples are most unfortunate: while maths and reading ability are stated as being assumed to be independent, this is probably not actually the case, which leads to some confusion with respect to the later use of the term "fact".
Robert Jones

I think a better example would be the case of a person who could have eaten a pound of something, which could have been potatoes or sausages. If that person had not put on weight during the period of the experiment, then the probability of having consumed either potatoes or sausages would be less than if the person had put on weight.
Robert Jones

Obviously, that person could have instead eaten something else and to confuse the issue further may also have been to the lavatory, so clearly there is a need to be prepared to look elsewhere for explanations.
Robert Jones

@RobertJones, the example I was given in class was "brainy" and "sporty" as admission criteria.
gwg

As I understand it, mental and physical fitness are generally considered to be correlated.
Robert Jones


I think your intuition is ok but your understanding of "explain away" reasoning is wrong.

In the article you linked to

"Explaining away" is a common pattern of reasoning in which the confirmation of one cause of an observed or believed event reduces the need to invoke alternative causes

(emphasis added)

This is quite different from your:

I use "explain away" reasoning, if C occurs, one of P(A) or P(B) increases, but the other decreases since I don't need alternative reasons to explain why C occurred.

You don't just need C to occur; it also needs to have been explained away by confirmation of A or B before you reduce the probability of the other possible explanation.

Think of it another way. The ground is shaking. You observe B: the giant is wandering around. This explains C, so it now seems unlikely that there is an earthquake; you settle for the giant explanation. But observing the giant was key: until you had this as the likely explanation of the shaking ground, nothing had been explained away. When all you had was C, in fact both P(A|C) and P(B|C) are greater than P(A) and P(B) respectively, as per @Glen_b's answer.
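A small numeric sketch of that last point (my own, with hypothetical numbers: independent causes A and B with prior 0.1 each, and C treated simply as "A or B"): seeing C alone raises both causes, but once the giant is actually observed, the earthquake drops back to its prior.

```python
# Sketch with hypothetical numbers: A = earthquake, B = giant, independent,
# each with prior 0.1, and C (shaking) modelled as "A or B".
p_a, p_b = 0.1, 0.1
p_c = 1 - (1 - p_a) * (1 - p_b)   # P(A or B) = 0.19

p_a_given_c = p_a / p_c           # ~0.53: shaking alone makes an earthquake more likely
p_b_given_c = p_b / p_c           # ~0.53: and likewise for the giant
p_a_given_c_and_b = p_a           # = 0.10: once B is observed, C adds nothing more about A
                                  # (B already accounts for C), so A falls back to its prior

print(round(p_a_given_c, 2), round(p_b_given_c, 2), round(p_a_given_c_and_b, 2))
```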


+1 for a couple of the other answers, but I think not enough emphasis was put on what I think is the OP's misreading of "explaining away".
Peter Ellis

+1: Concise and to the point. You might want to also point out that OP might also be missing that A and B must be causes of C.
Neil G


In the absence of specific additional information that changes the conditional probability of A or B, Bayes rule tells you

$$P(A \mid C) = \frac{P(C \mid A)\,P(A)}{P(C)}$$ and similarly for $P(B \mid C)$.

If $P(C \mid A)/P(C)$ and $P(C \mid B)/P(C)$ are both bigger than 1 (which you'd expect if the word 'explanation' is really to mean anything), then both A and B will be conditionally more probable than they were before C was observed.

It will be of interest to see if one becomes relatively more likely after observing C compared to before.

$$\frac{P(A \mid C)}{P(B \mid C)} = \frac{P(C \mid A)\,P(A)}{P(C \mid B)\,P(B)}$$

That is, the relative probability of the two after observing C is the relative probability before (P(A)/P(B)) times the ratio of the conditional probabilities of observing C given the two 'explanations'.
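Here is a worked numeric sketch of those two displays (all numbers hypothetical: priors of 0.1 for A and 0.001 for B, and a made-up table for P(C | A, B)). Both ratios come out well above 1, so both explanations become more probable after seeing C, while their relative probability barely moves from the prior ratio.

```python
# Numeric sketch with hypothetical numbers: priors for A and B, plus a
# conditional table P(C | A, B); A and B are assumed independent a priori.
p_a, p_b = 0.1, 0.001
p_c_given = {(1, 1): 0.99, (1, 0): 0.9,   # P(C | A, B) for each combination
             (0, 1): 0.9,  (0, 0): 0.01}

def p_ab(a, b):
    """Joint prior over (A, B), using independence."""
    return (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)

p_c = sum(p_c_given[a, b] * p_ab(a, b) for a in (0, 1) for b in (0, 1))
p_c_given_a = sum(p_c_given[1, b] * (p_b if b else 1 - p_b) for b in (0, 1))
p_c_given_b = sum(p_c_given[a, 1] * (p_a if a else 1 - p_a) for a in (0, 1))

print("P(C|A)/P(C)   =", round(p_c_given_a / p_c, 2))   # > 1, so P(A|C) > P(A)
print("P(C|B)/P(C)   =", round(p_c_given_b / p_c, 2))   # > 1, so P(B|C) > P(B)
print("P(A|C)/P(B|C) =", round(p_c_given_a * p_a / (p_c_given_b * p_b), 1))
print("P(A)/P(B)     =", round(p_a / p_b, 1))           # prior ratio, for comparison
```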



You're asking for intuition. What does it mean that A and B are independent? It means that if I tell you that I've just seen the monster, your opinion about the occurrence or not of the earthquake doesn't change; and conversely. If you think that both P(C|A) and P(C|B) are high, and I tell you that the ground is shaking and there is no monster in the town, wouldn't that change your opinion about the occurrence of the earthquake, making it more probable?
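A tiny sketch of that rhetorical question, with made-up numbers (P(A) = 0.1, P(C | A, not B) = 0.9, P(C | not A, not B) = 0.01, and A independent of B): learning "shaking ground and no monster" pushes the probability of an earthquake up sharply.

```python
# Bayes' rule with made-up numbers: shaking ground and no monster in town.
p_a = 0.1                          # prior for an earthquake; also P(A | not B), by independence
p_c_given_a_notb = 0.9             # P(C | A, not B)
p_c_given_nota_notb = 0.01         # P(C | not A, not B)

p_a_given_c_notb = (p_c_given_a_notb * p_a) / (
    p_c_given_a_notb * p_a + p_c_given_nota_notb * (1 - p_a))
print(round(p_a_given_c_notb, 2))  # ~0.91, up from the prior of 0.1
```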



From the linked abstract, it appears that "explaining away" is discussing a learning mechanism, a common way that humans reason, not a formal method of logic or probability. It's a human-like way of reasoning that's not formally correct, just as inductive reasoning is not formally correct (as opposed to deductive reasoning). So I think the formal logic and probability answers are very good, but not applicable. (Note that the abstract is in a Machine Intelligence context.)

Your giants example is very good for this. We believe that earthquakes or giants can cause the ground to shake. But we also believe that giants do not exist -- or are extremely unlikely to exist. The ground shakes. We will not investigate whether a giant is walking around, but rather we'll inquire as to whether an earthquake happened. Hearing that an earthquake did in fact happen, we are even more convinced that earthquakes are an adequate explanation of shaking ground and that giants are even more certain not to exist or are at least even more highly unlikely to exist.

We would accept that a giant caused the ground to shake only if: 1) we actually witnessed the giant and were willing to believe that we were not being fooled and that our previous assumption that giants were highly unlikely or impossible was wrong, or 2) we could totally eliminate the possibility of an earthquake and also eliminate all possibilities D, E, F, G, ... that we previously had not thought of but that now seem more likely than a giant.

In the giant case, it makes sense. This learning mechanism (an explanation we find likely becomes even more likely and causes other explanations to become less likely, each time that explanation works) is reasonable in general, but it will burn us, too. For example, the ideas that the earth orbits the sun, or that ulcers are caused by bacteria, had a hard time gaining traction because of "explaining away", which in this case we'd call confirmation bias.

The fact that the abstract is in a Machine Intelligence setting also makes me think this is discussing a learning mechanism commonly used by humans (and other animals, I imagine) that could benefit learning systems even though it can also be highly flawed. The AI community tried formal systems for years without getting closer to human-like intelligence, and I believe that pragmatics has won out over formalism: "explaining away" is something that we do, and thus something that AI needs to do.



I think an easier way to think of it is: if there is any variable C (with 0 < P(C) < 1) such that the occurrence of C increases the probability of both A and B, then A and B cannot be independent. In your example, you actually chose variables that you intuitively understand to be dependent, not independent. That is, the event that there is an earthquake and the event that a giant is stomping around aren't independent, since they both are more likely to occur when the floor is shaking.

Here is another example: let C be the event that it rains, A the event that you use an umbrella, and B the event that you wear rainboots. Clearly A and B are not independent, because when C occurs you are more likely to both wear galoshes and carry an umbrella. But if you lived in an area that never, ever rained, then A and B could potentially be independent: neither the umbrella nor the galoshes are being used as rain gear, so perhaps you wear the galoshes in the garden and use the umbrella to catch fish. They are only able to be independent because they don't share a cause.

Here is a proof: Suppose A and B are independent and also conditionally independent given C.

  1. $P(A \cap B) = P(A)\,P(B) = P(A \mid C)\,P(C) \cdot P(B \mid C)\,P(C) = P(A \mid C)\,P(B \mid C)\,P(C)^2$, since A is independent of B.
  2. $P(A \cap B) = P(A \cap B \mid C)\,P(C) = P(A \mid C)\,P(B \mid C)\,P(C)$, since A is conditionally independent of B given C.

(Both lines also use the fact that, in this setup, A, B, and hence A ∩ B occur only when C does, so for example $P(A) = P(A \mid C)\,P(C)$.)

It follows from 1 and 2 that $P(C) = P(C)^2$, hence $P(C) = 0$ or $P(C) = 1$.
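As a quick numeric check (my own sketch, hypothetical numbers, with C occurring exactly when A or B does so that the assumption above holds): marginal independence is satisfied, but conditional independence given C visibly fails, as it must whenever 0 < P(C) < 1.

```python
# Numeric check: independent A and B, with C = "A or B" (so 0 < P(C) < 1).
p_a, p_b = 0.3, 0.2
p_c = 1 - (1 - p_a) * (1 - p_b)            # 0.44, strictly between 0 and 1

p_a_given_c = p_a / p_c                    # A implies C, so P(A|C) = P(A)/P(C)
p_b_given_c = p_b / p_c
p_a_and_b_given_c = p_a * p_b / p_c        # A and B together also imply C

# Conditional independence given C would require these two numbers to match:
print(round(p_a_and_b_given_c, 4))         # 0.1364
print(round(p_a_given_c * p_b_given_c, 4)) # 0.3099, so conditional independence fails
```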


I think the OP is wondering how to understand A and B being marginally independent but dependent conditional on C, not how to understand A and B being marginally dependent but independent conditional on C.
conjugateprior
Licensed under cc by-sa 3.0 with attribution required.