Causal decision theory and newcomb’s problem
Posted: 15 May 2011 07:27 AM
Sr. Member
Total Posts:  6851
Joined  2006-12-20

Firstly this is not another free will thread.

I’ve been reading from this: http://plato.stanford.edu/entries/decision-causal/ and it’s intriguing.

For anyone not familiar with Newcomb’s problem, it’s explained in the link.

What intrigues me is that it seems simply obvious that choosing one box is the rational choice, and yet causal decision theory apparently says picking both boxes is the rational choice.

So what’s going on?

From the article:

Some theorists hold that one-boxing is plainly rational if the prediction is completely reliable. They maintain that if the prediction is certainly accurate, then choice reduces to taking $M or taking $T. This view oversimplifies.

That’s the view I take, and I don’t understand why it “oversimplifies.”

The answer given is:

If an agent one-boxes, then that act is certain to yield $M. However, the agent still would have done better by taking both boxes.

But why? In all possible worlds, all things being equal, the agent does worse by picking two boxes.

Doesn’t this mean it’s true to say the agent would do worse?

Stephen


Posted: 15 May 2011 02:21 PM   [ # 1 ]
Member
Total Posts:  114
Joined  2010-12-03

If the prediction has already been made and cannot be changed, then it seems obvious to me that taking both boxes is the better choice.  However, if there is a chance that by your very choosing the contents will change, then taking only the mystery box may lead to a higher value.

Note that if the predictor is indeed 100% reliable, then the person choosing between the boxes cannot actually have a choice in the matter if the prediction was made beforehand.

[ Edited: 15 May 2011 02:29 PM by Mingy Jongo ]

Posted: 15 May 2011 02:59 PM   [ # 2 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 15 May 2011 02:21 PM

If the prediction has already been made and cannot be changed, then it seems obvious to me that taking both boxes is the better choice.  However, if there is a chance that by your very choosing the contents will change, then taking only the mystery box may lead to a higher value.

There is no chance the contents can change.

Note that if the predictor is indeed 100% reliable, then the person choosing between the boxes cannot actually have a choice in the matter if the prediction was made beforehand.

Well, let’s say he’s 97% reliable and you play the game 100 times.

Why aren’t I right to say that, in all probability, you’ll be worse off 97 times out of 100?

Stephen


Posted: 15 May 2011 04:00 PM   [ # 3 ]
Member
Total Posts:  114
Joined  2010-12-03
StephenLawrence - 15 May 2011 02:59 PM
Mingy Jongo - 15 May 2011 02:21 PM

If the prediction has already been made and cannot be changed, then it seems obvious to me that taking both boxes is the better choice.  However, if there is a chance that by your very choosing the contents will change, then taking only the mystery box may lead to a higher value.

There is no chance the contents can change.

Note that if the predictor is indeed 100% reliable, then the person choosing between the boxes cannot actually have a choice in the matter if the prediction was made beforehand.

Well, let’s say he’s 97% reliable and you play the game 100 times.

Why aren’t I right to say that, in all probability, you’ll be worse off 97 times out of 100?

Stephen

I just realized that it depends on what is meant by “x% reliable”.  Does it just describe the predictor’s track record (as in it has been correct x% of the time), or does it describe the probability of it being correct for this specific trial?  If it is only the former, then it is better to take both boxes.

However, if the latter is the case, then I would say you still have no choice in the matter, as you would be pretty much a slave to the quantum dice.  Which goes to the topic of what “free will” actually means…


Posted: 15 May 2011 10:38 PM   [ # 4 ]
Member
Total Posts:  143
Joined  2008-09-27

This doesn’t seem right:

“I call it expected utility because a person by mistake may attach more or less utility to a bet than its expected utility warrants. “

should the last “expected” be deleted?


Posted: 15 May 2011 10:43 PM   [ # 5 ]
Member
Total Posts:  143
Joined  2008-09-27

” They each do better if they each act cooperatively than if they each act uncooperatively. However, each does better if he acts uncooperatively, no matter what the other does.”

Eek! Once again, it sounds to me like he isn’t writing carefully enough to say what he means. Am I just tired?


Posted: 15 May 2011 10:51 PM   [ # 6 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 15 May 2011 04:00 PM

I just realized that it depends on what is meant by “x% reliable”.  Does it just describe the predictor’s track record (as in it has been correct x% of the time), or does it describe the probability of it being correct for this specific trial?  If it is only the former, then it is better to take both boxes.

The idea is that he really can predict with a high degree of accuracy, and he does it by calculating, not some kind of magical foretelling.

However, if the latter is the case, then I would say you still have no choice in the matter, as you would be pretty much a slave to the quantum dice.  Which goes to the topic of what “free will” actually means…

Which is forbidden.

Stephen


Posted: 15 May 2011 11:21 PM   [ # 7 ]
Member
Total Posts:  143
Joined  2008-09-27

(good call)


Posted: 15 May 2011 11:23 PM   [ # 8 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 15 May 2011 04:00 PM

I just realized that it depends on what is meant by “x% reliable”.  Does it just describe the predictor’s track record (as in it has been correct x% of the time), or does it describe the probability of it being correct for this specific trial?  If it is only the former, then it is better to take both boxes.

I think this is interesting, because I wonder why it should make a difference.

Let’s say he has a very long track record. Why doesn’t that amount to the same thing, by the principle of induction?

Stephen


Posted: 16 May 2011 12:16 AM   [ # 9 ]
Member
Total Posts:  143
Joined  2008-09-27

This site posits it as “Now you have grounds to believe that in such matters, the Newcomb Being has a success rate on predictions of about 90%.”  Still kinda vague…


Posted: 16 May 2011 12:29 AM   [ # 10 ]
Member
Total Posts:  143
Joined  2008-09-27

You choose     Box A contains      Box B contains    Expected value
Only A         .9 x $1,000,000     $1,000            $900,000
Both boxes     .1 x $1,000,000     $1,000            $101,000
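That table’s arithmetic can be reproduced in a few lines of Python (a sketch only; it assumes the standard $1,000,000 / $1,000 payoffs and a 90%-reliable predictor, with the accuracy kept as a whole-number percentage so the sums stay exact):

```python
# Expected value of each strategy against a 90%-reliable predictor.
# Box A holds $1,000,000 only when the predictor expects you to take A alone.
accuracy = 90  # predictor accuracy, in percent

ev_only_a = accuracy * 1_000_000 // 100                  # .9 x $1,000,000
ev_both = (100 - accuracy) * 1_000_000 // 100 + 1_000    # .1 x $1,000,000 + $1,000

print(ev_only_a)  # 900000
print(ev_both)    # 101000
```

So on these numbers one-boxing has roughly nine times the expected value of two-boxing, which is what makes the causal-decision-theory recommendation feel so strange.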

Anyway, my initial reaction was a sort of “lazy élan”, or a “lazy unselfishness”. It wasn’t that I thought taking both boxes would actually hurt my chances at box A; it was just that I felt it was more graceful not to worry about a thousand when a million was at stake… and anyway, it’s just money changing hands, not something of intrinsic value being created or destroyed, so just taking A seems, to a degree, almost “enlightened”.

It would be more interesting if a moral component were added—if there were something really “greedy” in the immoral sense about taking both… whereas here it’s only greedy in an amoral sense.

Is it better, in general, to be the kind of person who takes both boxes?  The answer doesn’t really depend at all on this one, rather odd situation.


Posted: 16 May 2011 12:30 AM   [ # 11 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
isaac - 16 May 2011 12:16 AM

this site posits it as “Now you have grounds to believe that in such matters, the Newcomb Being has a success rate on predictions of about 90%.”  Still kinda vague…

http://www.greylabyrinth.com/puzzle/puzzle014

My view is it doesn’t matter. To do the puzzle, we need to take the success rate as the probability of him being right.

Then all we need to do is imagine having 100 goes picking both boxes. We’d get $1,000 90 times and $1,001,000 10 times, much less than if we picked one box every time.
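That hundred-games intuition is easy to check with a toy simulation (hypothetical code, not anything from the linked articles; it assumes a predictor who guesses a fixed strategy correctly with probability 0.9):

```python
import random

def total_winnings(strategy, trials=100, accuracy=0.9):
    """Total payout over `trials` games for a fixed strategy ('one' or
    'both') against a predictor who guesses that strategy correctly
    with probability `accuracy`."""
    rng = random.Random(0)  # fixed seed so repeated runs agree
    total = 0
    for _ in range(trials):
        correct = rng.random() < accuracy
        predicted = strategy if correct else ('both' if strategy == 'one' else 'one')
        box_a = 1_000_000 if predicted == 'one' else 0  # opaque box
        box_b = 1_000                                   # transparent box
        total += box_a if strategy == 'one' else box_a + box_b
    return total

# One-boxing nets about $90,000,000 over 100 games;
# two-boxing nets only about $10,100,000.
print(total_winnings('one'), total_winnings('both'))
```

The one-boxer ends up far richer on almost any run, which is exactly the “why ain’t you getting richer” puzzle discussed further down the thread.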

But I also think, because it doesn’t matter (unless someone can show it does), let’s simplify, as the puzzle in its original form does:

However, the original discussion by Nozick says only that the Predictor’s predictions are “almost certainly” correct, and also specifies that “what you actually decide to do is not part of the explanation of why he made the prediction he made”.

http://en.wikipedia.org/wiki/Almost_surely

In probability theory, one says that an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability one

Stephen


Posted: 16 May 2011 08:25 AM   [ # 12 ]
Member
Total Posts:  114
Joined  2010-12-03
StephenLawrence - 15 May 2011 11:23 PM
Mingy Jongo - 15 May 2011 04:00 PM

I just realized that it depends on what is meant by “x% reliable”.  Does it just describe the predictor’s track record (as in it has been correct x% of the time), or does it describe the probability of it being correct for this specific trial?  If it is only the former, then it is better to take both boxes.

I think this is interesting, because I wonder why it should make a difference.

Let’s say he has a very long track record. Why doesn’t that amount to the same thing, by the principle of induction?

Stephen

Because induction is fallible, and deduction is not.  In the latter scenario, you do not have a choice; however, in the case that you only know the percentage reflects the track record, there is a chance, however small, that you are actually “in control”, and should take both boxes.


Posted: 16 May 2011 08:37 AM   [ # 13 ]
Member
Total Posts:  114
Joined  2010-12-03

My view is it doesn’t matter. To do the puzzle, we need to take the success rate as the probability of him being right.

Then all we need to do is imagine having 100 goes picking both boxes. We’d get $1,000 90 times and $1,001,000 10 times, much less than if we picked one box every time.

There is a problem with that.  If “what you actually decide to do is not part of the explanation of why he made the prediction he made” is the case, then the expected value is dependent not on what you choose, but on what the predictor chooses.  It would be more like this:

If the predictor predicts the single box 100 times and is 90% correct, then 90 people will end up with $1,000,000 and 10 people will get $1,001,000.  Likewise, if the predictor predicts both boxes 100 times, then 90 people will end up with $1,000 and 10 with $0.


Posted: 16 May 2011 11:09 PM   [ # 14 ]
Sr. Member
Total Posts:  6851
Joined  2006-12-20
Mingy Jongo - 16 May 2011 08:37 AM

My view is it doesn’t matter. To do the puzzle, we need to take the success rate as the probability of him being right.

Then all we need to do is imagine having 100 goes picking both boxes. We’d get $1,000 90 times and $1,001,000 10 times, much less than if we picked one box every time.

There is a problem with that.  If “what you actually decide to do is not part of the explanation of why he made the prediction he made” is the case, then the expected value is dependent not on what you choose, but on what the predictor chooses.  It would be more like this:

If the predictor predicts the single box 100 times and is 90% correct, then 90 people will end up with $1,000,000 and 10 people will get $1,001,000.  Likewise, if the predictor predicts both boxes 100 times, then 90 people will end up with $1,000 and 10 with $0.

I’m still thinking about that.

It seems to me that if this actually happened to you, you would pick one box, because you’d be very sure that if you didn’t you’d be worse off. Or not? And even if you started off picking two boxes, you’d soon think “sod this for a game of soldiers”, switch in trial-and-error fashion and, as the money kept rolling in, keep going.

And that’s the puzzle: why does what looks like the rational choice, to you and to most experts on this, lose you money in practice?

As David Lewis put it, “why ain’cha rich?”

There are two options I can think of.

1) Two boxes isn’t the rational choice

2) It is, but this is a special case in which being rational doesn’t pay in practice.

Stephen

[ Edited: 16 May 2011 11:17 PM by StephenLawrence ]

Posted: 17 May 2011 11:17 AM   [ # 15 ]
Member
Total Posts:  114
Joined  2010-12-03

Let me go through this scenario by scenario.

The key assumptions are (1) the predictor’s prediction is not influenced by your actual choice, and (2) it is not changed once made.

First scenario: The predictor has been correct 90% of the time, but that figure does not reflect the probability of it being correct for this trial.
Suppose the predictor predicts the one box.  If you take the one, you will get $1,000,000.  If you take both, you will get $1,001,000.
Suppose the predictor predicts both boxes.  If you take the one, you will get $0.  If you take both, you will get $1,000.
Therefore, your choice determines your expected value, and the highest expected value comes from taking both boxes.

Second scenario: The predictor has a 100% probability of being correct for this trial.
Suppose the predictor predicts the one box.  If you take the one, you will get $1,000,000.  However, because of the key assumptions, it would be impossible for you to take both, as that would mean the predictor predicted wrong.
Suppose the predictor predicts both boxes.  If you take both, you will get $1,000.  For the same reasons as before, it would be impossible for you to pick only one in this case.
Therefore, your expected value is completely dependent on what the predictor predicts.  You have no say in the matter.

Third scenario: The predictor has a 90% probability of being correct for this trial.
Suppose the predictor predicts the one box.  Then you have a 90% chance of taking the one for $1,000,000, and a 10% chance of taking both for $1,001,000, with an EV of $1,000,100.
Suppose the predictor predicts both boxes.  Then you have a 90% chance of taking both for $1,000, and a 10% chance of taking the one for $0, with an EV of $900.
Your EV is once again dependent on what the predictor chooses.

Fourth scenario: You are given no information whatsoever on the reliability of the predictor.
Either the predictor determines your EV, or you do.  In all the cases where you determine your EV, it is better to take both boxes.
Therefore, you should try to take both boxes just in case you actually are in control.  If you are not, then you would be controlled by probability and not actually have a choice.
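The scenario arithmetic above can be tabulated in a few lines (a sketch with the standard payoffs; the 9-in-10 weighting is the third scenario’s assumption that the agent ends up matching the prediction 90% of the time):

```python
M, T = 1_000_000, 1_000  # opaque box A, transparent box B

def payoff(choice, predicted):
    """Winnings given the agent's choice and the predictor's prediction."""
    box_a = M if predicted == 'one' else 0
    return box_a if choice == 'one' else box_a + T

# Third scenario: given a prediction, the agent matches it 9 times in 10.
ev_given_one = (9 * payoff('one', 'one') + 1 * payoff('both', 'one')) // 10
ev_given_both = (9 * payoff('both', 'both') + 1 * payoff('one', 'both')) // 10

print(ev_given_one)   # 1000100, i.e. an EV of $1,000,100
print(ev_given_both)  # 900, i.e. an EV of $900
```

Conditioning on the prediction rather than the choice is what makes the EV come out under the predictor’s control here, which is the point of the second and third scenarios.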

[ Edited: 17 May 2011 11:20 AM by Mingy Jongo ]
