wu :: forums - Newcomb's Dilemma
riddles » hard  (Moderators: SMQ, Eigenray, towr, william wu, Icarus, Grimbal, ThudnBlunder)
Pages: 1 2 3 4
Topic: Newcomb's Dilemma  (Read 9512 times)
JocK
Uberpuzzler
Gender: male
Posts: 877
Re: Newcomb's Dilemma
« Reply #25 on: Jun 12th, 2005, 1:36am »
This 'dilemma' just constitutes yet another proof that the laws of physics must follow causality:
 
(A) = a physical device can be constructed that is capable of predicting all my actions
 
(B) = the choice that optimises my gain will give me less
 
if (A) then (B)
(B) is false
therefore (A) is false
 
 
I really don't see a paradox nor a dilemma here.
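JocK's three lines are an instance of modus tollens: from (A) implies (B) and not-(B), conclude not-(A). As a sketch (not part of the original post), the inference pattern can be checked mechanically by enumerating all truth assignments:

```python
# Modus tollens: from (A -> B) and (not B), conclude (not A).
# Enumerate every truth assignment and confirm there is no counterexample.
for A in (False, True):
    for B in (False, True):
        implies = (not A) or B        # material implication A -> B
        if implies and not B:         # both premises hold...
            assert not A              # ...so the conclusion must hold
print("modus tollens is valid")
```

Whether premise "(A) implies (B)" actually holds for Newcomb's problem is exactly what the following replies dispute; the inference rule itself is not in question.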
 
« Last Edit: Jun 12th, 2005, 10:57am by JocK »

solving abstract problems is like sex: it may occasionally have some practical use, but that is not why we do it.

x^y - y^x = x^5 - y^4 - y^3 = 20; x>0, y>0.
Icarus
wu::riddles Moderator
Uberpuzzler
Boldly going where even angels fear to tread.
Gender: male
Posts: 4863
Re: Newcomb's Dilemma
« Reply #26 on: Jun 13th, 2005, 4:51pm »
on Jun 11th, 2005, 7:44pm, andrewc32569 wrote:
I would choose B because the machine would most likely predict me to choose A.

Here is the reason: before you meet up with the rich guy, he would have checked the machine. However, the machine did not know that the rich guy was going to tell you that it had already predetermined your fate. Therefore, the machine would think I would choose A, knowing most anybody goes for a sure thing. But since you know that he asked the machine, you can determine statistically that you have a high chance of getting a billion dollars.

 
The machine is presented with exactly the information you have when you make your choice. It knows what the rich man will have told you. After all, the rich man knows what he has planned when he instructs the computer. There is no information given to you that was not available then.
 
----------------------------------------------
 
JocK - There is nothing here that indicates (A) => (B), as you have them stated.
 
(B) is self-contradictory. The choice that optimizes gain is by definition the one that gives more. The question is which choice is it?
 
If the machine can predict your choice with an error rate of less than 1 time out of 100,000, then statistically, choosing box A maximizes your gain. Otherwise choosing box B maximizes it.
 
Nor do the conditions of this dilemma in any way violate causality. The machine predicts your behavior, but does this in the manner in which we regularly (and with great accuracy) currently predict future events all the time: by simulating the future evolution of systems from their current state according to established physical laws. The dilemma merely posits that in the future the structure and behavior of the human brain will be so well understood that it will be possible to predict its behavior to the same sort of accuracy we currently can reach with mechanical systems.
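Icarus's 1-in-100,000 threshold can be reproduced with a quick expected-value calculation. The payoff rules below (a guaranteed $10,000 in box A; $1,000,000,000 placed in box B only when the machine predicted you would take A) are an assumption reverse-engineered from the figures quoted in this thread, not a verbatim statement of the problem:

```python
# Hedged sketch: expected value of each choice as a function of the
# machine's error rate e, under the ASSUMED payoff rules described above.
SURE_AMOUNT = 10_000          # box A: guaranteed (assumed)
BIG_PRIZE = 1_000_000_000     # box B: filled only if the machine predicted A (assumed)

def expected_value(choice, error_rate):
    if choice == "A":
        return SURE_AMOUNT            # the sure thing, regardless of prediction
    # If you choose B, an accurate machine predicted B and left it empty;
    # box B pays only when the machine mispredicted you.
    return error_rate * BIG_PRIZE

# Break-even: SURE_AMOUNT = e * BIG_PRIZE  =>  e = 1/100,000,
# matching the threshold Icarus states.
break_even = SURE_AMOUNT / BIG_PRIZE
print(break_even)  # 1e-05

assert expected_value("A", 1e-6) > expected_value("B", 1e-6)  # very accurate machine: take A
assert expected_value("B", 1e-3) > expected_value("A", 1e-3)  # sloppier machine: take B
```

With these (assumed) numbers, choosing A maximizes expected gain exactly when the error rate is below 1/100,000, as claimed.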

"Pi goes on and on and on ...
And e is just as cursed.
I wonder: Which is larger
When their digits are reversed? " - Anonymous
Deedlit
Senior Riddler
Posts: 476
Re: Newcomb's Dilemma
« Reply #27 on: Jun 13th, 2005, 6:14pm »
on Dec 23rd, 2002, 2:36pm, Icarus wrote:
I stated right at the start that this is not a puzzle. Really, I was hoping to get some people to take the position Gardner says many people took for him, which was that the logical course would be to open box B. I have even tried to throw out some of their reasoning in hopes someone will run with it, and maybe I can gain a better understanding of their point of view (yes, I personally believe you should take A). Maybe someone will yet.

 
Now that this thread has been bumped, I guess I'll comment on it.
 
The reason for picking box B is based on the fact that the money is either already there or already not there, and your decision isn't going to change it.  Of course, it could still be a bad decision, because there might be nothing under it.  That's why I prefer the version where you can take both boxes: by the above thinking, it's a no-brainer that you take both boxes.
 
In either version, though, the reasoning that leads you to pick box A isn't quite right.  The problem is, even the act of making a decision presupposes that you have a choice in the matter.  But, if we believe that the computer has flawlessly analyzed our brain well enough to know which box we'll pick, then we don't have a choice.  If the premise of the problem is correct, there's no point in pondering it!
 
So, for example, an A-supporter will tell someone who picked B, "That was foolish - you should have picked A and gotten $10,000!"  What he's telling the other guy is that he should have fooled the computer.  Which is ironic, since he presumably chose A because he was sure the computer could never be fooled.
 
Going back to SWF's version: how can the people who made the obviously better choice get less money than those who didn't?  The reason is that the computer discriminated against those people who would make that choice.  So, while there is clearly an advantage to taking both boxes after the money has already been placed, there is also an advantage in the computer believing you wouldn't do that.  There's a similarity between this and the "5 greedy pirates" problem - the answer is very odd, and it seems you can do much better if you can convince the others that you will act in a certain way.  The difference here is that the presumption is the computer cannot be fooled.
TenaliRaman
Uberpuzzler
I am no special. I am only passionately curious.
Gender: male
Posts: 1001
Re: Newcomb's Dilemma
« Reply #28 on: Jun 14th, 2005, 1:11am »
Also, "amazing accuracy" does not mean "always correct". This tells us one thing: the computer is following some sort of logic to predict, and is not getting supernatural data in its circuitry to predict our future. Now if the computer follows a certain amount of logic, then it is highly likely that it will end up saying "person will take A instead of B" more often than "person will take B instead of A". This gives us a nice opportunity to take B instead of A, which is more likely to have the 1 million dollars.
 
We can consider a variant of this situation.
We have a set of people standing in a queue, waiting to play this little game. After each person has chosen his box and left, this data is fed to the computer, which makes some sort of correction to its logic. Given that you are the nth person in the queue, which box would you choose?
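This queue variant can be simulated. The update rule below (predict whatever the majority of previous players chose) is purely a hypothetical stand-in for the computer's "correction to its logic"; the post does not specify the rule:

```python
def simulate_queue(choices, prior="A"):
    """For each player in the queue, the computer predicts the majority of
    the choices it has already seen (ties, including the first player, fall
    back to `prior`).  Returns the list of (prediction, actual) pairs.
    The majority rule is a hypothetical stand-in, not from the thread."""
    history = []
    results = []
    for actual in choices:
        a, b = history.count("A"), history.count("B")
        prediction = prior if a == b else ("A" if a > b else "B")
        results.append((prediction, actual))
        history.append(actual)
    return results

# If early players mostly take the sure box A, a later B-taker is mispredicted:
res = simulate_queue(["A", "A", "A", "B"])
print(res[-1])  # ('A', 'B') -- the 4th player exploits the learned bias
```

Under any such adaptive rule, a late player who knows the earlier choices can reason about what the computer has learned, which is the point of the variant.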
 
-- AI

Self discovery comes when a man measures himself against an obstacle - Antoine de Saint Exupery
Deedlit
Senior Riddler
Re: Newcomb's Dilemma
« Reply #29 on: Jun 14th, 2005, 1:20am »
If you are suggesting that the computer doesn't really know individuals, only the behavior of people in general, then the paradox doesn't seem to appear;  we can do our own investigation into what the computer will likely say (perhaps based on our demographics, or some basic information about us) so we can have a pretty good idea whether box B has the money or not.
 
The only real controversy seems to be in the "omniscient being" version, which defies the basic presumption of free will.
TenaliRaman
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #30 on: Jun 14th, 2005, 1:34am »
on Jun 14th, 2005, 1:20am, Deedlit wrote:
The only real controversy seems to be in the "omniscient being" version, which defies the basic presumption of free will.

Let's consider the omniscient-being version then. If this being can see into the future, his past will contain predictions which are all correct. Does the question still say "amazing accuracy"? If it does, then you can simply replace the computer with the omniscient being, with no change in the logic as to why one can go ahead and choose B.

Now let's say we don't know whether he has been correct in all his predictions so far. Then we are left to choose either A or B, and you can follow towr's suggestion of flipping a coin to choose one. Because if we are ready to accept that an omniscient being can exist which can predict our future, then there is no point in discussing free will.
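The coin-flip suggestion has a concrete consequence: against a genuinely random choice, no predictor can be right much more than half the time. A small Monte Carlo sketch (the candidate predictors are arbitrary deterministic stand-ins, not from the thread):

```python
import random

def coin_flip_accuracy(predictor, trials=100_000, seed=42):
    """Fraction of trials on which `predictor` guesses a fair coin flip."""
    rng = random.Random(seed)
    hits = 0
    for i in range(trials):
        guess = predictor(i)               # any deterministic prediction rule
        actual = rng.choice(["A", "B"])    # the player flips a fair coin
        hits += (guess == actual)
    return hits / trials

# Constant and alternating predictors alike land near 50% accuracy:
for predictor in (lambda i: "A", lambda i: "B", lambda i: "AB"[i % 2]):
    acc = coin_flip_accuracy(predictor)
    assert 0.49 < acc < 0.51               # pinned near 1/2 by the coin
```

So randomizing caps the predictor at chance level, which is why the coin flip sidesteps the premise that the machine knows what you will do.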
 
-- AI
« Last Edit: Jun 14th, 2005, 1:35am by TenaliRaman »
Deedlit
Senior Riddler
Re: Newcomb's Dilemma
« Reply #31 on: Jun 14th, 2005, 1:51am »
I'm not following your logic on why we should choose B.  Surely the omniscient being is aware of your position, so he'll put nothing under box B.
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #32 on: Jun 14th, 2005, 10:01am »
on Jun 13th, 2005, 4:51pm, Icarus wrote:

Jock - There is nothing here that indicates (A) => (B), as you have them stated.

 
I had the original version of Newcomb's Dilemma in mind for which certainly (A) => (B) : if a physical device can be constructed that is capable of predicting all my actions, then the choice that optimises my gain (i.e. selecting both boxes) will give me less than a sub-optimal choice (restricting myself to box B only).
 
on Jun 13th, 2005, 4:51pm, Icarus wrote:

(B) is self-contradictory. The choice that optimizes gain is by definition the one that gives more.

 
Of course! (B) is false, and therefore (A) is false. That is exactly the point I wanted to make.
 
Again, you have to start from the original version of the 'paradox' for which the choice that optimises your gain is to grab what is in both boxes, and not to restrict yourself to one box.
 
If you want, you can reformulate (B) as: emptying the two boxes gives you less than emptying the box labelled 'B'. (Obviously, still a false statement. )
 
In any case, the assumption (A) on which the 'dilemma' is based ('a physical device can be constructed that is capable of predicting all my actions'), is logically proven to be false.
« Last Edit: Jun 14th, 2005, 10:13am by JocK »
towr
wu::riddles Moderator
Uberpuzzler
Some people are average, some are just mean.
Gender: male
Posts: 13730
Re: Newcomb's Dilemma
« Reply #33 on: Jun 14th, 2005, 10:58am »
on Jun 14th, 2005, 10:01am, JocK wrote:
In any case, the assumption (A) on which the 'dilemma' is based ('a physical device can be constructed that is capable of predicting all my actions'), is logically proven to be false.
It is? When, why, where?
 
I really don't see why it should be a logical impossibility.
« Last Edit: Jun 14th, 2005, 10:59am by towr »

Wikipedia, Google, Mathworld, Integer sequence DB
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #34 on: Jun 14th, 2005, 11:58am »
on Jun 14th, 2005, 10:58am, towr wrote:

It is? When, why, where?

 
Yes! One day ago, because of a Reductio ad absurdum, here on this forum...!  Grin
 
 
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #35 on: Jun 14th, 2005, 12:15pm »
on Jun 13th, 2005, 4:51pm, Icarus wrote:

Nor do the conditions of this dilemma in any way violate causality. The machine predicts your behavior, but does this in the manner in which we regularly (and with great accuracy) currently predict future events all the time: by simulating the future evolution of systems from their current state according to established physical laws.

 
Ok, now the philosophical bit...
 
Causality is the absence of free will in one time direction.  (You can influence future events, but not the past.)  
 
Without such a thing as 'free will' the concept of causality is meaningless, and vice-versa.
 
I guess that when you speak of  "free will" and I speak of "causality", we basically mean one and the same thing.
 
Hmmm... I have the feeling that this will not be the last post in this thread...  Smiley
towr
wu::riddles Moderator
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #36 on: Jun 14th, 2005, 3:03pm »
on Jun 14th, 2005, 11:58am, JocK wrote:
because of a Reductio ad absurdum
I disagree that it reduces to something absurd.
If you were a computer there would be no problem with a bigger, better, faster computer predicting exactly what you'd do (if anything. There's always the halting problem, for example. But since the simulating computer is faster, it will know your decision before you do, if there ever is one).
I don't find it logically inconsistent to consider people might be biological computers.
« Last Edit: Jun 14th, 2005, 3:05pm by towr »
towr
wu::riddles Moderator
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #37 on: Jun 14th, 2005, 3:14pm »
on Jun 14th, 2005, 12:15pm, JocK wrote:
Causality is the absence of free will in one time direction.  (You can influence future events, but not the past.)
I would think causality is that one thing necessarily leads to another.
Someone's will could cause something.
 
Quote:
Without such a thing as 'free will' the concept of causality is meaningless, and vice-versa.
Why?
I mean, aside from the fact that there is no meaning without (free) will or a soul or something else that provides meaning.
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #38 on: Jun 14th, 2005, 3:24pm »
on Jun 14th, 2005, 3:03pm, towr wrote:

I disagree that it reduces to something absurd.

 
Are you serious? Isn't it obviously absurd when someone claims that two boxes can be prepared such that when you are given the choice of taking either
 
1) the contents of box 1, or
 
2) the contents of both boxes,  
 
that you will end up with less if you grab the contents of both boxes?
 
on Jun 14th, 2005, 3:03pm, towr wrote:

If you were a computer there would be no problem with a bigger better faster computer predicting exactly what you'd do  

 
That remark starts with a very big IF...
Deedlit
Senior Riddler
Re: Newcomb's Dilemma
« Reply #39 on: Jun 14th, 2005, 4:11pm »
on Jun 14th, 2005, 10:01am, JocK wrote:

Of course! (B) is false, and therefore (A) is false. That is exactly the point I wanted to make.
 
Again, you have to start from the original version of the 'paradox' for which the choice that optimises your gain is to grab what is in both boxes, and not to restrict yourself to one box.
 
If you want, you can reformulate (B) as: emptying the two boxes gives you less than emptying the box labelled 'B'. (Obviously, still a false statement. )
 
In any case, the assumption (A) on which the 'dilemma' is based ('a physical device can be constructed that is capable of predicting all my actions'), is logically proven to be false.

 
One should always be careful about proving things about the physical world by logical arguments.
 
You've made (B) sound false by oversimplifying the situation.  The argument for taking both boxes is that, once the money is there, there's no reason not to take both boxes.  The counter argument says nothing to contradict that;  it's based on the notion that, since the computer is able to read our actions 100% of the time, making the decision to take only one box will cause the computer to put more money in the first box.  The argument is often phrased to hide this strange-sounding causation - something like "With one decision you get $10,000, with the other you get a million, what's the problem?!!", but it's there nevertheless.
 
If you've ever seen "The Missing Link",  it's a game show in which contestants get to vote each other off between rounds, even though they cooperate during them to get the most money.  So, if you try to maximize the prize, that may cause your opponents to vote you off, and you end up with nothing.  Similarly, being the type of person who takes both boxes causes the computer to screw you, even though you're just making the rational decision.
 
It's true that there is a deep question in whether or not our actions are decided in advance.  If our brains function by electrical impulses that follow more or less classical laws of physics (i.e. it's too macroscopic for the Heisenberg uncertainty principle to factor into it), then all our actions are predetermined.  But then we have no real choices, which of course is completely antithetical to our entire mental process.
 
But there's no simple reductio ad absurdum like you describe.
Deedlit
Senior Riddler
Re: Newcomb's Dilemma
« Reply #40 on: Jun 14th, 2005, 4:15pm »
on Jun 14th, 2005, 12:15pm, JocK wrote:

 
Ok, now the philosophical bit...
 
Causality is the absence of free will in one time direction.  (You can influence future events, but not the past.)  
 
Without such a thing as 'free will' the concept of causality is meaningless, and vice-versa.
 
I guess that when you speak of  "free will" and I speak of "causality", we basically mean one and the same thing.
 

 
Sorry, but this is pure sophistry.  You could just as well say:
 
Without such a thing as a "flat earth", the concept of a non-flat earth is meaningless.
 
I guess that when you speak of "flat earth" and I speak of "non-flat earth", we basically mean the same thing.
 
rmsgrey
Uberpuzzler
Gender: male
Posts: 2873
Re: Newcomb's Dilemma
« Reply #41 on: Jun 15th, 2005, 8:36am »
1) As others have pointed out, "taking both boxes getting you less than just taking one" is not paradoxical if your decision affects the contents of the boxes.
JocK's argument boils down to:
"Your choice cannot influence the contents of the boxes, therefore a situation that requires your choice to influence the contents of the boxes cannot arise, therefore, the hypothetical situation where your choice influences the contents of the boxes is impossible."
This is known as circular reasoning, or begging the question...
 
2) Free Will in the presence of omniscience is a tricky subject at best (there are philosophers that prefer to sidestep the entire issue by defining free will as the result of our normal decision making process, and not worry about whether or not that result is predetermined). On the other hand, total omniscience isn't required for the apparent paradox - merely limited omniscience good enough to predict your choice of boxes. Yes, the existence of such a prediction may limit our free will, but there are a number of other predictions that can be made with near 100% certainty, without threatening to overturn causality or disturb the assumption of free will - for instance, I predict that, within the next 24 hours from my writing this, no-one from this forum will stand on the moon, walk through a solid brick wall, or fly unaided. Being unable to choose a different box is no more of a threat to free will than being unable to walk through walls.
 
3) I'm sure everyone can agree that there's absolutely no paradox if, instead of the boxes being filled (based on your choice) before you choose, the boxes are filled (based on your choice) after you choose. The problem only comes from the order of events. However, from your viewpoint, the only difference between the two is that in one you're required to accept the existence of a perfect prediction - if the setup was that you were given the choice, told that your choice was used to determine the contents of the boxes, and then the boxes were produced after you'd chosen, I think everyone would agree that you make the choice not to try and fool the system of your own free will, regardless of the fact that the boxes have actually been sitting sealed backstage for the last 6 months since the machine predicted your choice (the fact that, in this case, "the machine" consists of a single slip of paper with "Put a million in box B" scrawled on it is irrelevant). So what about being told your choice has been predicted makes the situation a threat to your free will? Now if you were told what the prediction was, then you'd have a limitation on your free will (or the machine wouldn't work).
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #42 on: Jun 15th, 2005, 10:48am »
on Jun 14th, 2005, 4:11pm, Deedlit wrote:

 
You've made (B) sound false by oversimplifying the situation.  The argument for taking both boxes is that, once the money is there, there's no reason not to take both boxes.  The counter argument says nothing to contradict that;  it's based on the notion that, since the computer is able to read our actions 100% of the time, making the decision to take only one box will cause the computer to put more money in the first box.  

 
You seem to miss my point. Remember causality? The computer has to go first. Subsequently I decide whether to take both boxes or not.
 
Ability to read my future actions? My future decision causing the computer to do something? Must be a strange world you live in!
 
on Jun 14th, 2005, 4:11pm, Deedlit wrote:

If you've ever seen "The Missing Link",  it's a game show in which contestants get to vote each other off between rounds, even though they cooperate during them to get the most money.  So, if you try to maximize the prize, that may cause your opponents to vote you off, and you end up with nothing.  Similarly, being the type of person who takes both boxes causes the computer to screw you, even though you're just making the rational decision.

 
I have seen "The Missing Link", but never watched the a-causal version....
 
on Jun 14th, 2005, 4:11pm, Deedlit wrote:

 
If our brains function by electrical impulses that follow more or less classical laws of physics (i.e. it's too macroscopic for the Heisenberg uncertainty principle to factor into it), then all our actions are predetermined.  

 
A big IF, accompanied by a 'more or less' that requires definition.  
 
One spanner (out of many) that can be thrown in: at a time when integrated-circuit designers are starting to worry about quantum-mechanical effects, surely a claim that quantum-mechanical effects play no role whatsoever in the functioning of a human brain would be naive.
 
 
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #43 on: Jun 15th, 2005, 10:56am »
on Jun 14th, 2005, 4:15pm, Deedlit wrote:

 
Sorry, but this is pure sophistry.  You could just as well say:
 
Without such a thing as a "flat earth", the concept of a non-flat earth is meaningless.
 
I guess that when you speak of "flat earth" and I speak of "non-flat earth", we basically mean the same thing.
 

 
Ok, no problem... you are allowed to write down this nonsense. (Having no free will whatsoever.... ) 
 
Tongue
 
« Last Edit: Jun 15th, 2005, 10:59am by JocK »
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #44 on: Jun 15th, 2005, 11:43am »
Clearly so far I haven't convinced any of you, but let me try to make one (last!) attempt:
 
You are in a big television studio facing two boxes. You cannot see the contents of the boxes, but the public - watching from the side - can. You are told by the quizmaster that one of them might contain a million dollars. You have a choice between taking the contents of both boxes, or alternatively the contents of just one of the boxes. What choice do you make?
 
Wait a second, there is a snag to it: you are also told that the boxes were filled a year before, based on a computer prediction of what you would do. If you were predicted to grab both boxes, no money was put in either of them. If you were predicted to take one of the boxes, a million dollars was put in that very box.
 
So again: what choice do you make?
 
OK, you have always been a modest person, and also this time you decide to go for one box. However, just as you are about to say "I would just like to have the contents of the left box please", a cosmic particle enters the studio and hits your brain, triggering a chain of events leading to the words "I would like to have the contents of both boxes please!" leaving your mouth.
 
What happens? Will the audience in the studio see a million dollars evaporate from the left box? Or were both boxes empty from the start, because the computer a year ago correctly predicted the state of the whole universe? But then it must be capable of predicting its own state a year ahead...
 
 
 
And finally, please answer honestly: would any of you under the given circumstance select only one box?
 
 
 
towr
wu::riddles Moderator
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #45 on: Jun 15th, 2005, 12:51pm »
on Jun 15th, 2005, 10:48am, JocK wrote:
You seem to miss my point. Remember causality? The computer has to go first. Subsequently I decide whether to take both boxes or not.
Fortunately that decision is predestined.
 
Quote:
Ability to read my future actions? My future decision causing the computer to do something? Must be a strange world you live in!
The phrasing might be a little inaccurate. But the same things that will inevitably cause you to reach your decision exist before the computer makes its prediction, and besides causing your decision, they cause the computer to fill the boxes appropriately.
 
Quote:
A big IF
Considering it's presupposed in the problem, it hardly matters whether it is actually the case, just that it is contingent.
towr
wu::riddles Moderator
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #46 on: Jun 15th, 2005, 1:05pm »
on Jun 15th, 2005, 11:43am, JocK wrote:
What happens? Will the audience in the studio see a million dollars evaporate from the left box?
Of course not, no more so than in the original problem.
If the prediction was that you'd take both boxes, then there would never have been a million to 'evaporate' in the first place.
 
Quote:
Or were both boxes empty from start, as the computer a year ago did predict correctly the state of the whole universe? But then it must be capable of predicting its own state a year ahead...
It only needs to predict a summary of your state. You could take issue with its error rate. But if the presupposition is that the prediction is correct, by whatever means, then that's not an issue.
 
Quote:
And finally, please answer honestly: would any of you under the given circumstance select only one box?
Depends on how much faith I have in the prediction. Or in fact what I'd predict the prediction to be.
 
Naturally, once the boxes are set up, choosing one option or the other doesn't change what's inside them.
However, if I'm always inclined to take just one (and the computer knows this), then the computer would predict I take only one and in fact I'd take only one.
And if I'm always inclined to choose both, they'd be empty.
 
More problematic is that it doesn't really matter whether the computer correctly predicts your behaviour. If it simply always predicts you'd take both, you can never win anything in this case.
« Last Edit: Jun 15th, 2005, 1:08pm by towr »
JocK
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #47 on: Jun 15th, 2005, 2:58pm »
Quote JocK: "Or were both boxes empty from start, as the computer a year ago did predict correctly the state of the whole universe? But then it must be capable of predicting its own state a year ahead... "  
 
Quote Towr: "It only needs to predict a summary of your state. You could take issue with it's error rate. But if the presupposition is that the prediction is correct, by whatever means, then that's not an issue."

 
 
OK, so we agree that this would lead to the conclusion that the computer is capable of predicting the whole universe including its own behaviour? (Remember: the physical universe is a K-system with a strongly mixing phase space.)
 
Well then... what I didn't tell you is that more than a year ago I constructed an exact copy of that computer. I used this copy (let's call it "comp B") to predict what the original computer ("comp A") would predict I would do with the boxes.  
 
And before I used comp B to predict comp A's behaviour, I have made up my mind:
 
- If comp A would predict I would open both boxes, I will choose only one box (let's say the leftmost box).  
 
- If, however, comp A would predict I would open only one of the boxes, I will open both boxes.
 
Now what prediction will the comp A make?  
 
Indeed: just as "a barber who shaves all men who don't shave themselves, and no-one else" cannot exist, a computer that can predict human behaviour cannot exist.
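JocK's copy-machine construction is the same diagonal argument that underlies the barber paradox and the halting proof. A minimal sketch (the function names are illustrative, not from the thread): an agent that is handed the predictor and inverts its output cannot be predicted by any deterministic predictor.

```python
def defiant_player(predictor):
    """Do the opposite of whatever the predictor says this player will do:
    one box if it predicts 'both', both boxes if it predicts 'one'."""
    return "one" if predictor(defiant_player) == "both" else "both"

def prediction_fails(predictor):
    """True when the predictor is wrong about the defiant player."""
    return defiant_player(predictor) != predictor(defiant_player)

# Every deterministic predictor is defeated on this one player; the
# self-referential access to the predictor ("comp B" above) is what
# produces the contradiction.
for predictor in (lambda p: "one", lambda p: "both"):
    assert prediction_fails(predictor)
```

Note what the sketch does and does not show: it rules out a predictor that remains correct even when the player can consult a copy of it, not a predictor of players who lack such access.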
« Last Edit: Jun 15th, 2005, 3:03pm by JocK »
towr
wu::riddles Moderator
Uberpuzzler
Re: Newcomb's Dilemma
« Reply #48 on: Jun 15th, 2005, 3:29pm »
No, the conclusion should be that no two such machines can exist without contradicting the premise that they can make a prediction about your behaviour if you have access to one of them. Not that one can't exist period.
 
And I still disagree that the computer would have to be able to predict the whole universe. You're not that complicated. If just one or a handful of particles from space hitting your brain could change your behaviour into its opposite, then you'd be a lot more wishy-washy. (Although that's going far too far into the physical for a thought experiment anyway.)
Deedlit
Senior Riddler
Re: Newcomb's Dilemma
« Reply #49 on: Jun 15th, 2005, 8:09pm »
on Jun 15th, 2005, 10:48am, JocK wrote:

 
You seem to miss my point. Remember causality? The computer has to go first. Subsequently I decide whether to take both boxes or not.
 
Ability to read my future actions? My future decision causing the computer to do something? Must be a strange world you live in!
 

 
Ah, but I was talking about your argument.  You claimed there was a contradiction, based on the following:
 
a)  Picking both boxes causes the player to end up with more money than if he just picked one.
 
b) Picking one box causes the player to end up with more money than if he picked both.
 
Now, how do you justify b?  You can try to explain in all kinds of ways, but at the root there has to be an implication "picking one box" -> "there's a million dollars in that box".  If you completely deny that connection, then there's no reason in the world not to pick both boxes.
 
From your line of thinking above - that the money is already there, and our choice has no effect on it - I can't imagine why you would hesitate in your choice.  Just take both.
 
Quote:
I have seen "The Missing Link", but never watched the a-causal version....

 
That would be pretty interesting.  "Aha, I see you backstabbed me in the future, but I'll beat you to the punch!"
 
Quote:

One spanner (out of the many) that can be thrown in: at times when integrated-circuit designers start worrying about quantum mechanical effects, surely a claim that quantum-mechanical effects play no role whatsoever in the functioning of a human brain would be naive.  

 
Perhaps.  But let me clarify what I meant by "more or less".  According to quantum mechanics, particles are limited in how accurately their position and velocity can be measured; this has some eerie consequences for the real world.  For example, if we are standing next to a wall, with positive probability we could suddenly appear on the other side of the wall.  However, the probability of this occurring is so low that, for all practical purposes, we can presume that we'll stay on the same side of the wall.
 
Perhaps the same is true with regard to our brains and their decision making.  Yes, there is some fundamental quantum uncertainty involved;  but perhaps, like our bodies jumping through the wall, the amount of deviation required to change a decision causes the probability to be negligible - i.e. it probably won't happen even once in our entire lives.
 
In any case, it seems that quantum indeterminacy doesn't really settle the 'free will' vs. 'determinism' issue.  Which I believe is basically unresolvable, since you can't "play reality twice" and see if the same things happen again.  (And yes, this last statement contains some ambiguities, which is precisely the problem.)
Powered by YaBB 1 Gold - SP 1.4!
Forum software copyright © 2000-2004 Yet another Bulletin Board