
Causal decision theory and Newcomb’s problem
 Posted: 17 May 2011 01:59 PM [ # 16 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 17 May 2011 11:17 AM

Let me go through this scenario by scenario.

The key assumptions are (1) the predictor’s prediction is not influenced by your actual choice, and (2) it is not changed once made.

First scenario: The predictor has been correct 90% of the time, but that figure does not reflect the probability of it being correct for this trial.
Suppose the predictor predicts the one box.  If you take the one, you will get 1,000,000.  If you take both, you will get 1,001,000.
Suppose the predictor predicts both boxes.  If you take the one, you will get 0.  If you take both, you will get 1,000.
Therefore, your choice determines your expected value, and the highest expected value comes from taking both boxes.

Second scenario: The predictor has a 100% probability of being correct for this trial.
Suppose the predictor predicts the one box.  If you take the one, you will get 1,000,000.  However, because of the key assumptions, it would be impossible for you to take both, as that would mean the predictor had predicted wrongly.
Suppose the predictor predicts both boxes.  If you take both, you will get 1,000.  For the same reason, it would be impossible for you to pick only the one box in this case.
Therefore, your expected value is completely dependent on what the predictor predicts.  You have no say in the matter.

Third scenario: The predictor has a 90% probability of being correct for this trial.
Suppose the predictor predicts the one box.  Then you have a 90% chance of taking the one for 1,000,000, and a 10% chance of taking both for 1,001,000, with an EV of 1,000,100.
Suppose the predictor predicts both boxes.  Then you have a 90% chance of taking both for 1,000, and a 10% chance of taking the one for 0, with an EV of 900.
Your EV is once again dependent on what the predictor chooses.

Fourth scenario: You are given no information whatsoever on the reliability of the predictor.
Either the predictor determines your EV, or you do.  In all the cases where you determine your EV, it is better to take both boxes.
Therefore, you should try to take both boxes just in case you actually are in control.  If you are not, then you would be controlled by probability and not actually have a choice.
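The arithmetic in these scenarios can be checked with a short sketch. This is a minimal illustration assuming the usual payoffs (1,000,000 in the opaque box if one-boxing was predicted, 1,000 in the transparent box); the function names are mine:

```python
# Payoffs assumed: 1,000,000 in the opaque box iff one-boxing was predicted,
# 1,000 always in the transparent box.

def payoff(choice, prediction):
    """Payout given the player's choice and the predictor's
    prediction ('one' or 'both')."""
    opaque = 1_000_000 if prediction == "one" else 0
    transparent = 1_000 if choice == "both" else 0
    return opaque + transparent

def ev_given_prediction(prediction, p_correct=0.9):
    """Scenario 3: given the prediction, the player takes the
    predicted option with probability p_correct."""
    other = "both" if prediction == "one" else "one"
    return (p_correct * payoff(prediction, prediction)
            + (1 - p_correct) * payoff(other, prediction))

print(ev_given_prediction("one"))   # ≈ 1,000,100
print(ev_given_prediction("both"))  # ≈ 900
```

These reproduce the EVs quoted in scenario 3, and show why, with the prediction held fixed, two-boxing always adds exactly 1,000.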

Thanks, Mingy Jongo, for going through all these scenarios.

I’ll be back once I’ve had time to think.

Stephen

[ Edited: 17 May 2011 02:08 PM by StephenLawrence ]

 Posted: 19 May 2011 01:17 PM [ # 17 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 17 May 2011 11:17 AM

Third scenario: The predictor has a 90% probability of being correct for this trial.
Suppose the predictor predicts the one box.  Then you have a 90% chance of taking the one for 1,000,000, and a 10% chance of taking both for 1,001,000, with an EV of 1,000,100.
Suppose the predictor predicts both boxes.  Then you have a 90% chance of taking both for 1,000, and a 10% chance of taking the one for 0, with an EV of 900.
Your EV is once again dependent on what the predictor chooses.

I think this is the best scenario to work with (leaving scenario 2 aside).

What are you saying? Are you saying in this scenario my choice makes no difference?

If so why?

In all the cases where you determine your EV, it is better to take both boxes.

I think this is the crux of the matter: better than what? Obviously better than what you would achieve if you took one box.

I think a useful way to think about this is to imagine an observer who can see what is in the opaque box. Say the observer sees a million and I select one box.

The observer says, “You should have selected both boxes; then you would have received 1,000 extra.”

My response is, “No, because if I had selected both boxes, most likely there would have been nothing in the opaque box.”

The disagreement between the observer and me is over which possible world to move to.

He says the one in which the predictor got it wrong.

And I say the one in which there was no money in the opaque box.

Who’s right and why?

Stephen

[ Edited: 19 May 2011 01:23 PM by StephenLawrence ]

 Posted: 19 May 2011 11:11 PM [ # 18 ]
Member
Total Posts:  114
Joined  2010-12-03
StephenLawrence - 19 May 2011 01:17 PM
Mingy Jongo - 17 May 2011 11:17 AM

Third scenario: The predictor has a 90% probability of being correct for this trial.
Suppose the predictor predicts the one box.  Then you have a 90% chance of taking the one for 1,000,000, and a 10% chance of taking both for 1,001,000, with an EV of 1,000,100.
Suppose the predictor predicts both boxes.  Then you have a 90% chance of taking both for 1,000, and a 10% chance of taking the one for 0, with an EV of 900.
Your EV is once again dependent on what the predictor chooses.

I think this is the best scenario to work with (leaving scenario 2 aside).

What are you saying? Are you saying in this scenario my choice makes no difference?

If so why?

I’m saying that you have no choice at all.  One way to look at it would be to imagine 10 parallel universes: in 9 of those, you would take what was predicted; in the other, you would not.  If it were possible to choose differently in any one of those, then it would be possible to change the overall probability of the predictor being correct for the trial, which the “90% probability” premise does not allow.
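This parallel-universe picture can be simulated. A hedged sketch, assuming only the 90% premise and the standard payoffs (the function name and figures are mine): fix the prediction, let the player match it 90% of the time, and average the payout over many simulated trials.

```python
import random

# Simulate the picture above: the prediction is fixed in advance, and in
# 90% of "universes" the player takes exactly the predicted option.
def average_payout(prediction, p_match=0.9, trials=100_000, seed=42):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        matched = rng.random() < p_match
        choice = prediction if matched else ("both" if prediction == "one" else "one")
        opaque = 1_000_000 if prediction == "one" else 0
        total += opaque + (1_000 if choice == "both" else 0)
    return total / trials

# The averages land near the EVs quoted in scenario 3: about 1,000,100 when
# one box was predicted, about 900 when both were.
print(average_payout("one"), average_payout("both"))
```

Note that the player's "choice" never enters as a free variable; once the prediction and the match rate are fixed, so is the average payout, which is exactly Mingy Jongo's point.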

StephenLawrence - 19 May 2011 01:17 PM

In all the cases where you determine your EV, it is better to take both boxes.

I think this is the crux of the matter: better than what? Obviously better than what you would achieve if you took one box.

I think a useful way to think about this is to imagine an observer who can see what is in the opaque box. Say the observer sees a million and I select one box.

The observer says, “You should have selected both boxes; then you would have received 1,000 extra.”

My response is, “No, because if I had selected both boxes, most likely there would have been nothing in the opaque box.”

The disagreement between the observer and me is over which possible world to move to.

He says the one in which the predictor got it wrong.

And I say the one in which there was no money in the opaque box.

Who’s right and why?

Stephen

It is specified that the predictor’s prediction is not influenced by your actual choice, and it is not changed once made.  Your choice (if you have one) does not magically change the contents of the box.


 Posted: 19 June 2011 04:34 AM [ # 19 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 19 May 2011 11:11 PM

It is specified that the predictor’s prediction is not influenced by your actual choice, and it is not changed once made.  Your choice (if you have one) does not magically change the contents of the box.

Yes, this is a good argument for taking both boxes.

Still, what would I do? Take one box, because I “intuitively reckon” it’s the best thing to do, and I would trust that judgement over the reasoning you are giving. My intuitions are based on the fact that one-boxers get richer than two-boxers, and that overrules all else.

So perhaps I’m just being irrational. But what makes the puzzle interesting is that perhaps I’m not, and if I were to answer why not, it would tell us something about choice-making, possible worlds and causation.
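The “one-boxers get richer” statistic can be put in numbers under one assumption (mine, purely illustrative): the predictor correctly anticipates a player's disposition 90% of the time.

```python
def disposition_ev(disposition, accuracy=0.9):
    """Average payout for a player whose disposition ('one' or 'both')
    the predictor anticipates correctly with the given accuracy."""
    p_predict_one = accuracy if disposition == "one" else 1 - accuracy
    ev = 0.0
    for prediction, p in (("one", p_predict_one), ("both", 1 - p_predict_one)):
        opaque = 1_000_000 if prediction == "one" else 0
        ev += p * (opaque + (1_000 if disposition == "both" else 0))
    return ev

print(disposition_ev("one"))   # ≈ 900,000
print(disposition_ev("both"))  # ≈ 101,000
```

On average, habitual one-boxers walk away with far more, which is the statistic this intuition rests on; the two-boxer's reply is that by the time you choose, the contents are already fixed.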

Stephen


 Posted: 30 December 2011 11:55 PM [ # 20 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 19 May 2011 11:11 PM

It is specified that the predictor’s prediction is not influenced by your actual choice, and it is not changed once made.  Your choice (if you have one) does not magically change the contents of the box.

My choice doesn’t change the contents of the box, but by picking one box I do raise the probability of there being £1,000,000 in there.

So I would try to make the case that it is rational to do that.

Stephen


 Posted: 31 December 2011 01:04 PM [ # 21 ]
Sr. Member
Total Posts:  475
Joined  2008-03-08

I haven’t gone through the posts and maybe this has been brought up, but if the predictions are absolutely accurate, then there’s no possibility of leaving the table with more than $1MM. It doesn’t seem realistic, but that would be the implication if the prediction were 100% accurate.


 Posted: 01 January 2012 01:23 AM [ # 22 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Kaizen - 31 December 2011 01:04 PM

I haven’t gone through the posts and maybe this has been brought up, but if the predictions are absolutely accurate, then there’s no possibility of leaving the table with more than $1MM. It doesn’t seem realistic, but that would be the implication if the prediction were 100% accurate.

The original puzzle was of a form close to that and yet still people advocate taking both boxes.

However, the original discussion by Nozick says only that the Predictor’s predictions are “almost certainly” correct, and also specifies that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

http://en.wikipedia.org/wiki/Almost_surely

In probability theory, one says that an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability one.

In this thread we’ve been thinking about the case where the predictor is 90% accurate.

Stephen


 Posted: 02 January 2012 06:12 AM [ # 23 ]
Sr. Member
Total Posts:  5255
Joined  2007-08-31

Stephen,

It still reminds me of a discussion in the free-will thread about the prediction machine. I introduced it here.

I assume this is the reason Nozick stated that ‘it is nearly always right’. Making a ‘perfect prediction’ (in this case, of which box(es) you will choose) would cause an endless loop, so the prediction machine can do nothing but break off the loop arbitrarily. But there is no reason to think that the outcome will then converge to your choice, so I have reason to believe that the machine cannot predict what I will do at all. Only when the machine does not interfere with the part of the universe it is predicting can it be 100% reliable. But is there reason to believe that a prediction machine that is nearly always right does better than a machine that is always right only in situations where it does not interfere with the situation at hand?
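The loop argument can be made concrete with a toy example (names mine, purely illustrative): an agent who is allowed to see the prediction and is disposed to do the opposite is an agent no predictor can ever be right about.

```python
# A contrarian agent: shown the prediction, it always takes the other option.
def contrarian(prediction):
    return "both" if prediction == "one" else "one"

# Whatever the machine predicts, it is wrong; a 100%-reliable predictor is
# therefore only possible if its prediction does not feed back into the choice.
for prediction in ("one", "both"):
    assert contrarian(prediction) != prediction
```

Newcomb's setup avoids this loop by stipulating that the player never sees the prediction before choosing, which is why the machine does not "interfere" with what it predicts.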

I think there is a relation with the liar’s paradox: if somebody says ‘I am always lying’, then I know that at least this statement is a lie, so he sometimes speaks the truth. The liar’s paradox is a logical paradox, not a real one. (If you do not believe me, then please say ‘I am always lying’. Did you succeed? And was it true?)

For more fun, see the Unexpected hanging paradox.


GdB

The light is on, but there is nobody at home.


 Posted: 05 January 2012 08:16 PM [ # 24 ]
Sr. Member
Total Posts:  475
Joined  2008-03-08
StephenLawrence - 01 January 2012 01:23 AM

In this thread we’ve been thinking about the case where the predictor is 90% accurate.

Stephen

Here’s my take on it:

In a case where the prediction is 100% accurate, it would be logically contradictory to be able to leave the table with more than $1MM. If you can leave with more than $1MM, then the prediction necessarily is not 100% accurate. If the prediction is from some fortune teller and is 100% accurate, then there’s no real choice in the matter. If the prediction is based on someone from the future who knows the answer ahead of time, then we’re confronted with the Grandfather Paradox and the problems and possible solutions it entails. If the new knowledge from the future allows us to change our supposedly fated action, then the choice is causally connected to the “prediction” (though presumably the guy would have come from some parallel universe, or possibly Canada) and there’s no dilemma.

If the prediction is said to be “almost sure” (as used in probability theory), we have a more interesting problem. The confusion comes from losing track of what exactly the causal factors are in making the prediction, I think. If it’s true that the agent’s choice does not causally affect what’s in the boxes, then calculating the expected utility from conditional probabilities based on the reliability of the prediction is where the mistake is made, since the prediction is causally determined by other factors.

Expected utility seems to be premised on a causal relationship in an act-state pair. As pointed out, in the Newcomb Problem there is no causal relationship between the act of one-boxing or two-boxing and the state of there being $M in the opaque box or not. So using expected utility in this case seems inappropriate. It would be appropriate in the case where the act causally affects the prediction and consequently affects whether the moolah is in the opaque box (the state); this would presumably occur sometime before the choice is offered. Not that it would matter if the prediction was made at some unspecified point in the future, provided the choice was unknown to the predictor until after the “prediction” was made and the prediction wasn’t affected by the choice in any way. If that crap above is the case, then it is always more rational to “two-box.”
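The two calculations being separated here can be sketched side by side, using the 90% figure from this thread (function names are mine, a sketch rather than anyone's official formulation):

```python
def evidential_ev(choice, accuracy=0.9):
    # Conditions on the choice: one-boxing is treated as evidence that
    # the predictor foresaw one-boxing.
    p_money = accuracy if choice == "one" else 1 - accuracy
    return p_money * 1_000_000 + (1_000 if choice == "both" else 0)

def causal_ev(choice, p_money):
    # Holds the (already settled) contents fixed: the choice cannot
    # move p_money.
    return p_money * 1_000_000 + (1_000 if choice == "both" else 0)

# Evidential reasoning favours one-boxing...
assert evidential_ev("one") > evidential_ev("both")
# ...while causally, two-boxing is better by exactly 1,000 whatever p_money is.
for p in (0.0, 0.5, 1.0):
    assert causal_ev("both", p) == causal_ev("one", p) + 1_000
```

The disagreement running through this thread is precisely over which of these two calculations deserves the name "expected utility" in Newcomb's problem.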

[ Edited: 28 January 2012 03:17 AM by Kaizen ]
