wu :: forums (http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi)
riddles >> hard >> Newcomb's Dilemma
(Message started by: Icarus on Dec 21st, 2002, 7:37pm)

Title: Newcomb's Dilemma
Post by Icarus on Dec 21st, 2002, 7:37pm
This is not really a puzzle, but it is an interesting question. Once again it comes from the pages of Martin Gardner. The name refers to William A. Newcomb, a physicist at Lawrence Livermore Laboratory, who came up with it in 1960.

A great advance has occurred in the field of artificial intelligence. A computer has been programmed to predict your actions, and has proven itself able to do so with amazing accuracy. You have put it through many difficult and devious tests, yet it has always been correct in its predictions. A certain multibillionaire with a twisted sense of humor has decided to reward you for your great discovery. He takes you into a room with two boxes and tells you that box A contains $10,000, while box B either contains nothing or a certified check for 1 billion dollars. You will be allowed to open exactly one box and keep its contents. The catch is this: earlier, your program was asked what you would do. If the computer predicted you would open box A, then the 1 billion dollar check was placed in box B. If the computer predicted you would open box B, the box will be empty. Which box should you open?

Note that box B was prepared before you enter the room.

Title: Re: Newcomb's Dilemma
Post by Jeremiah Smith on Dec 21st, 2002, 8:13pm
Well, I would take A... if the computer knows I'll take A, it'll stick the big money in B, but I still get ten grand from A.  If I decide to take B, though, it sticks nothing in B, and so I get nothing. So I'm really choosing between $10,000 and nothing.

Hmmm... beforehand, do you know what the computer will do in each situation, or are you oblivious to the fact that what's in B is determined by the computer's predictions?

Title: Re: Newcomb's Dilemma
Post by Icarus on Dec 21st, 2002, 9:15pm
You are told about the prediction and its consequences. But you are not told what the prediction was.

Title: Re: Newcomb's Dilemma
Post by Jeremiah Smith on Dec 22nd, 2002, 3:06am
So then I'd take A :)

I think part of the reason this riddle isn't so easy is that people will usually be greedy and try to find some way to get the billion dollars, instead of realizing that they should just try to get as much as possible.

Title: Re: Newcomb's Dilemma
Post by towr on Dec 22nd, 2002, 7:19am
flip a coin..
Let randomness guide 'your' action.
50% chance it'll have you take A, in which case you get $10,000. 50% chance it'll have you take B, which the AI program can only have predicted correctly 50% of the time, so an expected value of half a billion dollars. So a total expected value of just over a quarter of a billion dollars..
Good odds considering the game didn't cost you anything..
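
For what it's worth, the expectation is easy to write out. Below is a small Python sketch of the coin-flip strategy (my own illustration, using the amounts from Icarus's statement of the problem and the assumption that the predictor can do no better than guess against a fair coin):

Code:
# Toy model of the coin-flip strategy: the predictor cannot foresee a fair
# coin, so its prediction of "A" or "B" is right only half the time, and
# box B holds the check only when the prediction was "A".
BOX_A = 10_000          # guaranteed contents of box A
BOX_B = 1_000_000_000   # contents of box B *if* the computer predicted "A"

p_take_B = 0.5          # you flip a fair coin
p_pred_A_given_B = 0.5  # chance the computer wrongly predicted "A" when you take B

expected = (1 - p_take_B) * BOX_A + p_take_B * p_pred_A_given_B * BOX_B
print(f"Expected payoff: ${expected:,.0f}")   # $250,005,000 -- just over a quarter billion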

Title: Re: Newcomb's Dilemma
Post by Icarus on Dec 22nd, 2002, 3:30pm
Good idea, towr. But just to be ornery: if the computer predicts you will give yourself any chance of opening box B, the box will be empty.

The point of this question is about free will. If the computer predicts you will open A, then when you are in the room, B has the 1 billion. Why not just go ahead and take it? Why settle for the 10,000 when the 1 billion is at hand?

Title: Re: Newcomb's Dilemma
Post by fenomas on Dec 22nd, 2002, 7:48pm
I think I'd walk into the room, switch the labels on the boxes, and open what is now labeled as Box A.

Title: Re: Newcomb's Dilemma
Post by towr on Dec 23rd, 2002, 1:08am
Ah yes.. free will..
The illusion people have about not being predictable.. Actually, being predictable and free will needn't have anything to do with each other.. It can be your own choice to be predictable.

In the end, here, it depends on how the AI program can figure out your decisions.. If it's clairvoyant you can't do very much against it, but chances are it isn't. So it could use rules and statistics.
The problem with rules in this case is: take A, so the billion is in B, so take B, so it's empty, thus take A, etc.
If that's how the program reacts it's a 50-50 chance. But under the new by-law, any chance of taking B means it's empty, so take A. So why would the computer predict any chance for B? In the end the AI is likely to go insane..
Of course there may be rules to guard against going into an infinite loop, just like people have (you get tired of looping eventually).

An interesting question might be, does the AI think like you? If it does, which is likely since it needs to predict your choices exactly, it may be on your side, and say A so you can take B. Of course it'd need to know the situation. But to predict the situation it will think it is you, and some AI program is predicting what it'll do. So you get into another loop, probably ending in a 50-50 chance. Which would mean B is empty, so it chooses A, so you should choose B, which will thus be empty..

I think since any good AI program would always leave open the possibility that you'll choose B, it will be empty. Even if you know it has to be empty there's still the chance you pick it because you're not sure..

Title: Re: Newcomb's Dilemma
Post by fenomas on Dec 23rd, 2002, 6:53am
Towr, I think you're talking yourself in circles... Which is clearly what this poser was designed for (I'm assuming this isn't really a "puzzle", with an answer, but something to get us thinking).

In the end, with the "any chance of taking B" addendum, you really can't get the billion dollars without contradicting the setup of the puzzle, which is that the computer is "amazingly accurate", which is to say that it will presumably be correct. It doesn't matter how it thinks- it's a black box as far as the puzzle is concerned. Even cute answers, like switching the labels, or getting someone to hypnotize you into thinking "A" is "B" before you go in don't really count, because the computer should see that too. I don't think the billion is gettable.

So I'm going to go with: you walk into the room and say, "What is love?" or some such deep question, then when the computer gets confused and explodes (a la classic Star Trek- that western episode), you take the money.

Title: Re: Newcomb's Dilemma
Post by Icarus on Dec 23rd, 2002, 2:36pm
I stated right at the start that this is not a puzzle. Really, I was hoping to get some people to take the position Gardner says many people took for him, which was that the logical course would be to open box B. I have even tried to throw out some of their reasoning in hopes someone will run with it, and maybe I can gain a better understanding of their point of view (yes, I personally believe you should take A). Maybe someone will yet.

In the dilemma as stated by Martin Gardner, the predictor was an omniscient being, not a computer. But it seemed to me that a lot of the responses he mentioned were nothing more than atheists getting upset about the suggestion that such a being could exist. By making the predictor a computer program, I was hoping to avoid arguments based on religious fervor rather than rational reasoning.

Title: Re: Newcomb's Dilemma
Post by SWF on Dec 23rd, 2002, 7:05pm
Perhaps everyone is claiming to take Box A to fool the computer, which presumably has access to this riddle forum, should this situation ever arise.

Suppose this is a TV show and before making your choice a camera inside Box B shows millions of viewers that Box B has $1 billion inside.  If you pick B, will the money disappear in front of everyone's eyes?  The computer isn't that powerful.  All it can do is make predictions.  At the time of your choice, the money is either already in the box or it is not, and the computer can't change it.  Everyone here is saying they will take Box A.  Don't they wish they had taken Box B?  Remember that every time after Box A was chosen, it turns out there was $1 billion in Box B all along.  All those TV viewers must think you are nuts for picking Box A.

This version of the paradox is slightly different from the usual version.  In the usual description, the choice is between taking just box B (which the computer stocked with $1 million if it predicted this choice, otherwise empty) or taking the contents of both boxes (box A always contains $1000).  So in Icarus's description you get rich if you successfully fool the predictor, but in the usual version you get rich if you match the prediction.  However, there is the temptation to take both boxes anyway, since the contents of both boxes are always $1000 more than just box B.

Title: Re: Newcomb's Dilemma
Post by lukes new shoes on Dec 29th, 2002, 4:31am
does the computer know its prediction is going to be used to determine the contents of box B?

Title: Re: Newcomb's Dilemma
Post by Jeremiah Smith on Dec 29th, 2002, 8:37am

on 12/23/02 at 19:05:09, SWF wrote:
Everyone here is saying they will take Box A.  Don't they wish they had taken Box B?  Remember that every time after Box A was chosen, it turns out there was $1 billion in Box B all along.  All those TV viewers must think you are nuts for picking Box A.


Yes, but theoretically, the computer or the deity would analyze your brain wave patterns and thought processes and think "Hmmm...he's gonna think about taking A, so the billion will be in B, and then he's gonna switch and take B at the last second."

I think a lot of the paradoxes seem to stem from thinking the predictor isn't omniscient regarding what you'll do.

Title: Re: Newcomb's Dilemma
Post by BNC on Dec 29th, 2002, 11:56pm
The way I read the dilemma, you cannot fool the computer / deity (let's assume, in the sci-fi manner, that the AI has a time machine in which it observes your future actions). Given that, I totally agree with the previous posts – take box A.


note to self: when designing my omniscient AI machine, leave a loophole for allowing me to select the 1 billion.. ;)

BNC

Title: Re: Newcomb's Dilemma
Post by James Fingas on Jan 2nd, 2003, 9:57am
BNC,

That's exactly it! You can never win picking box B, because if you were to pick box B, then the computer would have predicted it. Even with this weird prediction thing, it's still cause-and-effect:

1) You decide to pick box B
2) The computer realizes (before the fact, but whatever!) that you'll pick box B
3) There is nothing in box B.

Therefore, you get $0 if you pick box B, and $10 000 if you pick box A. The fact that $1 billion is in box B is not helpful to you--the only reason it's there is because you're taking the $10 000.

Title: Re: Newcomb's Dilemma
Post by SWF on Jan 3rd, 2003, 6:37am

on 12/29/02 at 08:37:09, Jeremiah Smith wrote:
I think a lot of the paradoxes seem to stem from thinking the predictor isn't omniscient regarding what you'll do.


Yes, the arguments for taking box A are jumping to the conclusion that the computer is omniscient, even though the question only states that it has never been wrong in many tests.  I guess it depends on how easily one is convinced that the program will always be right in the future.  If instead one cent were in box A, would anyone change his mind to box B?  If so, isn't that admitting that there is still some possibility that the computer will fail?  Even $10,000 vs. $1 billion is a factor of 100,000 difference, so I guess a streak of "many" convinces some people that the odds are at least 100,000 to one against the streak being broken in a given test.

Title: Re: Newcomb's Dilemma
Post by BNC on Jan 4th, 2003, 1:59am

on 01/03/03 at 06:37:02, SWF wrote:
Yes, the arguments for taking box A are jumping to the conclusion that the computer is omniscient, even though the question only states that it has never been wrong in many tests.   <snip>


That's an interesting point. As I posted before, I read this dilemma as if the computer were omniscient (like in other posted versions of the dilemma). But all this version of the dilemma stated was that the computer has amazing accuracy.

I guess that you may assign a value P to the probability of "the AI being right", where P < 1. In that case, the expected value of box B would be (1-P)x10^9. So box B would be a good choice if (1-P)x10^9 > 10^4, i.e. if (1-P) > 10^-5.

But then it only gets more complicated. I think that in such cases, it would be safe to assume that P is not constant for all predicted cases. E.g., P would be larger for the question "what will you have for breakfast?" (which has happened a lot of times in the past) than it would for the question "would you save your worst enemy from being eaten by a magical, soul-eating dragon?" (which probably didn't happen more than once or twice before...)

If that is indeed the case, we will need to estimate P for the question at hand. Now, let's assume that whoever designed an "almost omniscient" AI program has a good understanding of math and logic. In fact, we may even assume he / she is a member of this very board. So P would be calculated, not for the first-time question "what box do you pick?", but rather for the more general question "how do you choose based on statistics?". The program would assume you do the math!

So P would depend on P itself. The larger P is, the larger it would become. That's positive feedback in action. P will reach 1 very rapidly. Thus, (1-P)x10^9 -> 0.

In conclusion, I would still pick box A.
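
For what it's worth, the expected-value comparison above takes only a couple of lines of Python. This is just a sketch (mine, not part of the original dilemma), with P treated as a single fixed accuracy; the crossover sits exactly at (1-P) = 10^-5:

Code:
# Sketch of the expected-value comparison: P is the probability the
# computer's prediction is correct. Box A always pays $10,000; box B pays
# $10^9 only if the prediction was wrong.
def best_box(P: float) -> str:
    ev_A = 10_000                     # certain payoff
    ev_B = (1 - P) * 1_000_000_000    # pays off only when the computer errs
    return "A" if ev_A >= ev_B else "B"

for P in (0.9, 0.99, 0.9999, 0.99999, 0.999999):
    print(P, best_box(P))
# Box B has the higher expectation until P reaches 0.99999 (an error rate of
# 1 in 100,000); from there on box A wins.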

Title: Re: Newcomb's Dilemma
Post by mook on Jan 4th, 2003, 10:24am
After I ask the computer which box he predicted me to open, I open the other box.  Why should I think so hard when I have discovered this great artificial intelligence?

Title: Re: Newcomb's Dilemma
Post by Jeremiah Smith on Jan 4th, 2003, 12:53pm

on 01/04/03 at 10:24:51, mook wrote:
After I ask the computer which box he predicted me to open, I open the other box.  Why should I think so hard when I have discovered this great artificial intelligence?


Because the AI doesn't tell you what box it predicted you'd open.

Title: Re: Newcomb's Dilemma
Post by Kozo Morimoto on Jan 4th, 2003, 6:22pm

on 01/03/03 at 06:37:02, SWF wrote:
If so, isn't that admitting that there is still some possibility that the computer will fail?  Even $10,000 vs. $1 billion is a factor of 100,000 difference, so I guess a streak of "many" convinces some people that the odds are at least 100,000 to one against the streak being broken in a given test.


This raises an interesting investment philosophy: would you take the $10k (guaranteed), or pick the box with a 1 in 100,000 chance of getting $1b and a 99,999 in 100,000 chance of getting $0?  (sort of like Who Wants to be a Millionaire, where you can guarantee walk-away money, or go 50-50 for a chance to double your walk-away money)

Both choices give you an E(payout) of $10k; however, the latter box has a higher standard deviation, or in investment terms, risk.  Current theory states that given the same expected return, you should pick the investment with the lower risk.
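
To make the risk comparison concrete, here is a short Python sketch (my own illustration; the 1-in-100,000 lottery is the hypothetical above) computing the mean and standard deviation of both options:

Code:
import math

def mean_and_std(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    mean = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean, math.sqrt(var)

sure_thing = [(1.0, 10_000)]                                      # take the guaranteed $10k
lottery = [(1 / 100_000, 1_000_000_000), (99_999 / 100_000, 0)]   # gamble on box B

for name, outcomes in (("sure $10k", sure_thing), ("gamble", lottery)):
    m, s = mean_and_std(outcomes)
    print(f"{name}: mean ${m:,.0f}, std dev ${s:,.0f}")
# Both means are $10,000, but the gamble's standard deviation is about $3.16 million.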

Title: Re: Newcomb's Dilemma
Post by mook on Jan 4th, 2003, 8:33pm

on 01/04/03 at 12:53:18, Jeremiah Smith wrote:
Because the AI doesn't tell you what box it predicted you'd open.


But the riddle doesn't tell you that you can't ask.

Title: Re: Newcomb's Dilemma
Post by SWF on Jan 4th, 2003, 8:45pm
I agree with mook: you can ask, and if the computer answers, it is setting itself up to fail or be a liar.  I was thinking in terms of building an exact copy of the computer and, after the contents of Box B have been set, asking it what is inside Box B.  If it says Box B is empty, pick Box A.  If it says it contains a billion dollars, pick Box B.  One of the two infallible computers must have given an incorrect answer.  I wonder if this was one of the devious tests mentioned in the question for which the computer was successful.

Title: Re: Newcomb's Dilemma
Post by Icarus on Jan 5th, 2003, 1:45pm
Been away for a while... You can examine this problem either from the point of view that the computer can occasionally be wrong, or from the point of view that it is always correct. Obviously, if it is occasionally inaccurate, your best expected return may come from opening box B, depending on how accurate the computer is.

So consider the computer to be always accurate. Asking the computer (or a copy of it) ahead of time what box it will predict runs afoul of this logical conundrum: You can either know exactly what your future will be, or have freedom to decide what you will do, but not both. If the computer is infallible, it will not answer your question. Neither the original nor any copy.


Quote:
This version of the paradox is slightly different from the usual version.  In the usual description, the choice is between taking just box B (which the computer stocked with $1 million if it predicted this choice, otherwise empty) or taking the contents of both boxes (box A always contains $1000).


Yes, I tried to write it out from memory and didn't realize that I had changed it. Maybe that is why no one is arguing the position equivalent to the one some of Gardner's readers took for that version: that the logical action is to take both boxes even though that would give you less money. These people were not trying to outsmart the predictor; they were actually somewhat upset that those making the "right" choice would get less money than the others. This is the point of view I was hoping someone would take so that I could come to understand it in discussion. (I doubt I would ever agree with it, but I would like to know what the thought behind it is.) I may have blown it, though, by changing the problem. :(

Title: Re: Newcomb's Dilemma
Post by rmsgrey on Apr 24th, 2003, 10:53am
The problem assumes some form of determinism - therefore there is no reason to assume my choice to be logical. For that matter, what happens when I take a box with a button on it into the computer's consultation chamber and ask whether I will press it or not in the ten seconds after I hear the machine's answer, having resolved to do the opposite? For the machine to always be accurate, something must happen to frustrate my intentions - maybe I pass out or get distracted immediately after hearing that I won't press it, and so don't; or maybe I accidentally press it somehow before the time limit, having heard that I will. Either way, as soon as the computer makes its decision (if not sooner) I lose free will. In which case, the question of which box I should open is irrelevant. Of course, if I genuinely possess any free will in the matter, then it seems that it should be possible to fool the computer (maybe by using a perfect random number generator :) - though the upgraded version means the computer is only fooled when you pick box A and B is empty :()

Anyway, in the Gardner version, the reason people take both boxes (apart from lacking any choice) is that they don't really accept the time-reversed causation. If you require consistency in all time-like loops (vital for infallibility of machine) then the contents of box B are sort of like Schroedinger's Cat - making your final decision collapses the waveform so that it's "always" been that way - the audience always saw the contents of box B that way, but until you gave your final answer, the audience itself was in a superposition...

Title: Re: Newcomb's Dilemma
Post by andrewc32569 on Jun 11th, 2005, 7:44pm
Here is what I think.

The obvious choice is A, right? Wrong.


I would choose B BECAUSE the machine would most likely predict me to choose A.


Here is the reason: before you meet up with the rich guy, he would have checked the machine. However, the machine did not know that the rich guy was going to tell you that it had already predetermined your fate. Therefore, the machine would think I would choose A, knowing most anybody goes for a sure thing. But since you know that he asked the machine, you can determine statistically that you have a high chance of getting the billion dollars.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 12th, 2005, 1:36am
This 'dilemma' just constitutes yet another proof that the laws of physics must follow causality:

(A) = a physical device can be constructed that is capable of predicting all my actions

(B) = the choice that optimises my gain will give me less

if (A) then (B)
(B) = false
then (A) = false


I really don't see a paradox nor a dilemma here.


Title: Re: Newcomb's Dilemma
Post by Icarus on Jun 13th, 2005, 4:51pm

on 06/11/05 at 19:44:46, andrewc32569 wrote:
I would choose B BECAUSE the machine would most likely predict me to choose A.


Here is the reason: before you meet up with the rich guy, he would have checked the machine. However, the machine did not know that the rich guy was going to tell you that it had already predetermined your fate. Therefore, the machine would think I would choose A, knowing most anybody goes for a sure thing. But since you know that he asked the machine, you can determine statistically that you have a high chance of getting the billion dollars.


The machine is presented with exactly the information you have when you make your choice. It knows what the rich man will have told you. After all, the rich man knows what he has planned when he instructs the computer. There is no information given to you that was not available then.

----------------------------------------------

Jock - There is nothing here that indicates (A) => (B), as you have them stated.

(B) is self-contradictory. The choice that optimizes gain is by definition the one that gives more. The question is which choice is it?

If the machine can predict your choice with an error rate of less than 1 time out of 100,000, then statistically, choosing box A maximizes your gain. Otherwise choosing box B maximizes it.

Nor do the conditions of this dilemma in any way violate causality. The machine predicts your behavior, but does this in the manner in which we regularly (and with great accuracy) currently predict future events all the time: by simulating the future evolution of systems from their current state according to established physical laws. The dilemma merely posits that in the future the structure and behavior of the human brain will be so well understood that it will be possible to predict its behavior to the same sort of accuracy we currently can reach with mechanical systems.

Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 13th, 2005, 6:14pm

on 12/23/02 at 14:36:08, Icarus wrote:
I stated right at the start that this is not a puzzle. Really, I was hoping to get some people to take the position Gardner says many people took for him, which was that the logical course would be to open box B. I have even tried to throw out some of their reasoning in hopes someone will run with it, and maybe I can gain a better understanding of their point of view (yes, I personally believe you should take A). Maybe someone will yet.


Now that this thread has been bumped, I guess I'll comment on it.

The reason for picking box B is based on the fact that the money is either already there or already not there, and your decision isn't going to change it.  Of course, it could still be a bad decision because there might be nothing in it.  That's why I like the version where you can take both boxes better - by the above thinking, it's a no-brainer that you take both boxes.

In either version, though, the reasoning that leads you to pick box A isn't quite right.  The problem is, even the act of making a decision presupposes that you have a choice in the matter.  But, if we believe that the computer has flawlessly analyzed our brain well enough to know which box we'll pick, then we don't have a choice.  If the premise of the problem is correct, there's no point in pondering it!

So, for example, an A-supporter will tell someone who picked B, "That was foolish - you should have picked A and gotten $10,000!"  What he's telling the other guy is that he should have fooled the computer.  Which is ironic, since he presumably chose A because he was sure the computer could never be fooled.

Going back to SWF's version:  how can the people who made the obviously better choice get less money than those who didn't?  The reason is that the computer discriminated against those people who would make that choice.  So, while there is clearly an advantage to taking both boxes after the money has already been placed, there is also an advantage in the computer believing you wouldn't do that.  There's a similarity between this and the "5 greedy pirates" problem - the answer is very odd, and it seems you can do much better if you can convince the others that you will act in a certain way.  The difference here is that the presumption is the computer cannot be fooled.

Title: Re: Newcomb's Dilemma
Post by TenaliRaman on Jun 14th, 2005, 1:11am
Also, "amazing accuracy" doest not mean "always correct". This points to us one thing, that the computer is following some sort of logic to predict and is not getting supernatural data in its circuitry to predict our future. Now if the computer follows certain amount of logic, then it is highly likely that it will end up saying "person will take A instead of B" more often than "person will take B instead of A". This gives us a nice opportunity to take B instead of A, which is more likely to have the 1 million dollars.

We can consider a sort of variant of this situation.
We have a set of people standing in a queue, waiting to play this little game. After each person has chosen his box and left, that data is fed to the computer, which makes some sort of correction to its logic. Given that you are the nth person in the queue, which box would you choose?
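
A toy simulation may help to explore this variant. Everything below is my own construction, not something specified above: I assume the computer's "correction" is simply to predict whichever box the majority of earlier players actually chose (defaulting to A), and I keep Icarus's payoffs (A always pays $10k; B pays $1e9 only if the prediction for that player was A).

Code:
import random

def simulate(n_players=1_000, p_choose_B=0.3, seed=0):
    rng = random.Random(seed)
    counts = {"A": 0, "B": 0}          # choices made by earlier players
    totals = {"A": [], "B": []}        # payoffs, grouped by choice
    for _ in range(n_players):
        # assumed correction rule: predict the majority choice so far
        prediction = "B" if counts["B"] > counts["A"] else "A"
        choice = "B" if rng.random() < p_choose_B else "A"
        if choice == "A":
            payoff = 10_000
        else:
            payoff = 1_000_000_000 if prediction == "A" else 0
        counts[choice] += 1
        totals[choice].append(payoff)
    return {c: sum(v) / len(v) for c, v in totals.items() if v}

print(simulate())
# With only 30% of players taking B, the majority keeps taking A, so the
# computer keeps predicting "A" and the B-takers collect the billion almost
# every time -- a frequency-based corrector like this is easy to exploit.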

-- AI

Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 14th, 2005, 1:20am
If you are suggesting that the computer doesn't really know individuals, only the behavior of people in general, then the paradox doesn't seem to appear;  we can do our own investigation into what the computer will likely say (perhaps based on our demographics, or some basic information about us) so we can have a pretty good idea whether box B has the money or not.

The only real controversy seems to be in the "omniscient being" version, which defies the basic presumption of free will.

Title: Re: Newcomb's Dilemma
Post by TenaliRaman on Jun 14th, 2005, 1:34am

on 06/14/05 at 01:20:57, Deedlit wrote:
The only real controversy seems to be in the "omniscient being" version, which defies the basic presumption of free will.

Let's consider the omniscient being version then. If this being can see into the future, his past will contain predictions which are all correct. Does the question still say "amazing accuracy"? If it does, then you can simply replace the computer with the omniscient being, with no change in the logic as to why one can go ahead and choose B.

Now let's say we don't know whether he has been correct in all his predictions so far. Then we are left to choose either A or B, and you can follow towr's suggestion of flipping a coin to choose one. Because if we are ready to accept that an omniscient being can exist which can predict our future, then there is no point in discussing free will.

-- AI

Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 14th, 2005, 1:51am
I'm not following your logic on why we should choose B.  Surely the omniscient being is aware of your position, so he'll put nothing under box B.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 14th, 2005, 10:01am

on 06/13/05 at 16:51:14, Icarus wrote:
Jock - There is nothing here that indicates (A) => (B), as you have them stated.


I had the original version of Newcomb's Dilemma in mind for which certainly (A) => (B) : if a physical device can be constructed that is capable of predicting all my actions, then the choice that optimises my gain (i.e. selecting both boxes) will give me less than a sub-optimal choice (restricting myself to box B only).


on 06/13/05 at 16:51:14, Icarus wrote:
(B) is self-contradictory. The choice that optimizes gain is by definition the one that gives more.


Of course! (B) is false, and therefore (A) is false. That is exactly the point I wanted to make.

Again, you have to start from the original version of the 'paradox' for which the choice that optimises your gain is to grab what is in both boxes, and not to restrict yourself to one box.

If you want, you can reformulate (B) as: emptying the two boxes gives you less than emptying the box labelled 'B'. (Obviously, still a false statement. )

In any case, the assumption (A) on which the 'dilemma' is based ('a physical device can be constructed that is capable of predicting all my actions'), is logically proven to be false.

Title: Re: Newcomb's Dilemma
Post by towr on Jun 14th, 2005, 10:58am

on 06/14/05 at 10:01:09, JocK wrote:
In any case, the assumption (A) on which the 'dilemma' is based ('a physical device can be constructed that is capable of predicting all my actions'), is logically proven to be false.
It is? When, why, where?

I really don't see why it should be a logical impossibility.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 14th, 2005, 11:58am

on 06/14/05 at 10:58:14, towr wrote:
It is? When, why, where?


Yes! One day ago, because of a Reductio ad absurdum, here on this forum...!  ;D



Title: Re: Newcomb's Dilemma
Post by JocK on Jun 14th, 2005, 12:15pm

on 06/13/05 at 16:51:14, Icarus wrote:
Nor do the conditions of this dilemma in any way violate causality. The machine predicts your behavior, but does this in the manner in which we regularly (and with great accuracy) currently predict future events all the time: by simulating the future evolution of systems from their current state according to established physical laws.


Ok, now the philosophical bit...

Causality is the absence of free will in one time direction.  (You can influence future events, but not the past.)

Without such a thing as 'free will' the concept of causality is meaningless, and vice-versa.

I guess that when you speak of  "free will" and I speak of "causality", we basically mean one and the same thing.

Hmmm.... I have the feeling that this will not be the last post in this thread...  :)

Title: Re: Newcomb's Dilemma
Post by towr on Jun 14th, 2005, 3:03pm

on 06/14/05 at 11:58:24, JocK wrote:
because of a Reductio ad absurdum
I disagree that it reduces to something absurd.
If you were a computer there would be no problem with a bigger better faster computer predicting exactly what you'd do (if you ever do anything - there's always the halting problem, for example. But since the simulating computer is faster, it will know your decision before you do, if there ever is one).
I don't find it logically inconsistent to consider people might be biological computers.

Title: Re: Newcomb's Dilemma
Post by towr on Jun 14th, 2005, 3:14pm

on 06/14/05 at 12:15:54, JocK wrote:
Causality is the absence of free will in one time direction.  (You can influence future events, but not the past.)
I would think causality is that one thing necessarily leads to another.
Someone's will could cause something.


Quote:
Without such a thing as 'free will' the concept of causality is meaningless, and vice-versa.
Why?
I mean, aside from the fact that there is no meaning  without (free) will or soul or something else that provides meaning.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 14th, 2005, 3:24pm

on 06/14/05 at 15:03:14, towr wrote:
I disagree that it reduces to something absurd.


Are you serious? Isn't it obviously absurd when someone claims that two boxes can be prepared such that when you are given the choice of taking either

1) the contents of box 1, or

2) the contents of both boxes,

that you will end up with less if you grab the contents of both boxes?


on 06/14/05 at 15:03:14, towr wrote:
If you were a computer there would be no problem with a bigger better faster computer predicting exactly what you'd do


That remark starts with a very big IF...

Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 14th, 2005, 4:11pm

on 06/14/05 at 10:01:09, JocK wrote:
Of course! (B) is false, and therefore (A) is false. That is exactly the point I wanted to make.

Again, you have to start from the original version of the 'paradox' for which the choice that optimises your gain is to grab what is in both boxes, and not to restrict yourself to one box.

If you want, you can reformulate (B) as: emptying the two boxes gives you less than emptying the box labelled 'B'. (Obviously, still a false statement. )

In any case, the assumption (A) on which the 'dilemma' is based ('a physical device can be constructed that is capable of predicting all my actions'), is logically proven to be false.


One should always be careful about proving things about the physical world by logical arguments.

You've made (B) sound false by oversimplifying the situation.  The argument for taking both boxes is that, once the money is there, there's no reason not to take both boxes.  The counter argument says nothing to contradict that;  it's based on the notion that, since the computer is able to read our actions 100% of the time, making the decision to take only one box will cause the computer to put more money in the first box.  The argument is often phrased to hide this strange-sounding causation - something like "With one decision you get $10,000, with the other you get a million, what's the problem?!!", but it's there nevertheless.

If you've ever seen "The Missing Link",  it's a game show in which contestants get to vote each other off between rounds, even though they cooperate during them to get the most money.  So, if you try to maximize the prize, that may cause your opponents to vote you off, and you end up with nothing.  Similarly, being the type of person who takes both boxes causes the computer to screw you, even though you're just making the rational decision.

It's true that there is a deep question in whether or not our actions are decided in advance.  If our brains function by electrical impulses that follow more or less classical laws of physics (i.e. it's too macroscopic for the Heisenberg uncertainty principle to factor into it), then all our actions are predetermined.  But then we have no real choices, which of course is completely antithetical to our entire mental process.

But there's no simple reductio ad absurdum like you describe.

Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 14th, 2005, 4:15pm

on 06/14/05 at 12:15:54, JocK wrote:
Ok, now the philosophical bit...

Causality is the absence of free will in one time direction.  (You can influence future events, but not the past.)

Without such a thing as 'free will' the concept of causality is meaningless, and vice-versa.

I guess that when you speak of  "free will" and I speak of "causality", we basically mean one and the same thing.


Sorry, but this is pure sophistry.  You could just as well say:

Without such a thing as a "flat earth", the concept of a non-flat earth is meaningless.

I guess that when you speak of "flat earth" and I speak of "non-flat earth", we basically mean the same thing.


Title: Re: Newcomb's Dilemma
Post by rmsgrey on Jun 15th, 2005, 8:36am
1) As others have pointed out, "taking both boxes getting you less than just taking one" is not paradoxical if your decision affects the contents of the boxes.
JocK's argument boils down to:
"Your choice cannot influence the contents of the boxes, therefore a situation that requires your choice to influence the contents of the boxes cannot arise, therefore, the hypothetical situation where your choice influences the contents of the boxes is impossible."
This is known as circular reasoning, or begging the question...

2) Free Will in the presence of omniscience is a tricky subject at best (there are philosophers that prefer to sidestep the entire issue by defining free will as the result of our normal decision making process, and not worry about whether or not that result is predetermined). On the other hand, total omniscience isn't required for the apparent paradox - merely limited omniscience good enough to predict your choice of boxes. Yes, the existence of such a prediction may limit our free will, but there are a number of other predictions that can be made with near 100% certainty, without threatening to overturn causality or disturb the assumption of free will - for instance, I predict that, within the next 24 hours from my writing this, no-one from this forum will stand on the moon, walk through a solid brick wall, or fly unaided. Being unable to choose a different box is no more of a threat to free will than being unable to walk through walls.

3) I'm sure everyone can agree that there's absolutely no paradox if, instead of the boxes being filled (based on your choice) before you choose, the boxes are filled (based on your choice) after you choose. The problem only comes from the order of events. However, from your viewpoint, the only difference between the two is that in one you're required to accept the existence of a perfect prediction - if the setup was that you were given the choice, told that your choice was used to determine the contents of the boxes, and then the boxes were produced after you'd chosen, I think everyone would agree that you make the choice not to try and fool the system of your own free will, regardless of the fact that the boxes have actually been sitting sealed backstage for the last 6 months since the machine predicted your choice (the fact that, in this case, "the machine" consists of a single slip of paper with "Put a million in box B" scrawled on it is irrelevant). So what about being told your choice has been predicted makes the situation a threat to your free will? Now if you were told what the prediction was, then you'd have a limitation on your free will (or the machine wouldn't work).

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 15th, 2005, 10:48am

on 06/14/05 at 16:11:17, Deedlit wrote:
You've made (B) sound false by oversimplifying the situation.  The argument for taking both boxes is that, once the money is there, there's no reason not to take both boxes.  The counter argument says nothing to contradict that;  it's based on the notion that, since the computer is able to read our actions 100% of the time, making the decision to take only one box will cause the computer to put more money in the first box.  


You seem to miss my point. Remember causality? The computer has to go first. Subsequently I decide whether to take both boxes or not.

Ability to read my future actions? My future decision causing the computer to do something? Must be a strange world you live in!


on 06/14/05 at 16:11:17, Deedlit wrote:
If you've ever seen "The Missing Link",  it's a game show in which contestants get to vote each other off between rounds, even though they cooperate during them to get the most money.  So, if you try to maximize the prize, that may cause your opponents to vote you off, and you end up with nothing.  Similarly, being the type of person who takes both boxes causes the computer to screw you, even though you're just making the rational decision.


I have seen "The Missing Link", but never watched the a-causal version....


on 06/14/05 at 16:11:17, Deedlit wrote:
If our brains function by electrical impulses that follow more or less classical laws of physics (i.e. it's too macroscopic for the Heisenberg uncertainty principle to factor into it), then all our actions are predetermined.


A big IF, accompanied by a 'more or less' that requires definition.

One spanner (out of the many) that can be thrown in: at times when integrated-circuit designers start worrying about quantum mechanical effects, surely a claim that quantum-mechanical effects play no role whatsoever in the functioning of a human brain would be naive.



Title: Re: Newcomb's Dilemma
Post by JocK on Jun 15th, 2005, 10:56am

on 06/14/05 at 16:15:39, Deedlit wrote:
Sorry, but this is pure sophistry.  You could just as well say:

Without such a thing as a "flat earth", the concept of a non-flat earth is meaningless.

I guess that when you speak of "flat earth" and I speak of "non-flat earth", we basically mean the same thing.


Ok, no problem... you are allowed to write down this nonsense. (Having no free will whatsoever.... ) 

:P


Title: Re: Newcomb's Dilemma
Post by JocK on Jun 15th, 2005, 11:43am
Clearly so far I haven't convinced any of you, but let me try to make one (last!) attempt:

You are in a big television studio facing two boxes. You cannot see the contents of the boxes, but the public - watching from the side - can. You are told by the quizmaster that one of them might contain a million dollars. You have a choice between taking the contents of both boxes, or alternatively the contents of one of the boxes. What choice do you make?

Wait a second, there is a snag: you are also told that the boxes were filled a year before, based on a computer prediction of what you would do. If you were predicted to grab both boxes, no money was put in either of them. If you were predicted to take one of the boxes, a million dollars was put in that very box.

So again: what choice do you make?

OK, you have always been a modest person, and also this time you decide to go for one box. However, just as you are about to say "I would just like to have the contents of the left box please", a cosmic particle enters the studio and hits your brain, triggering a chain of events leading to the words "I would like to have the contents of both boxes please!" leaving your mouth.

What happens? Will the audience in the studio see a million dollars evaporate from the left box? Or were both boxes empty from start, as the computer a year ago did predict correctly the state of the whole universe? But then it must be capable of predicting its own state a year ahead...



And finally, please answer honestly: would any of you under the given circumstance select only one box?




Title: Re: Newcomb's Dilemma
Post by towr on Jun 15th, 2005, 12:51pm

on 06/15/05 at 10:48:22, JocK wrote:
You seem to miss my point. Remember causality? The computer has to go first. Subsequently I decide whether to take both boxes or not.
Fortunately that decision is predestined.


Quote:
Ability to read my future actions? My future decision causing the computer to do something? Must be a strange world you live in!
The phrasing might be a little inaccurate. But the same things that will inevitably cause you to reach your decision exist before the computer makes its prediction, and, aside from causing your decision, they cause the computer to fill the boxes appropriately.


Quote:
A big IF
Considering it's presupposed in the problem, it hardly matters whether it is actually the case, just that it is contingent.

Title: Re: Newcomb's Dilemma
Post by towr on Jun 15th, 2005, 1:05pm

on 06/15/05 at 11:43:29, JocK wrote:
What happens? Will the audience in the studio see a million dollars evaporate from the left box?
Of course not, no more so than in the original problem.
If the prediction was that you'd take both boxes, then there would never have been a million to 'evaporate' in the first place.


Quote:
Or were both boxes empty from start, as the computer a year ago did predict correctly the state of the whole universe? But then it must be capable of predicting its own state a year ahead...
It only needs to predict a summary of your state. You could take issue with its error rate. But if the presupposition is that the prediction is correct, by whatever means, then that's not an issue.


Quote:
And finally, please answer honestly: would any of you under the given circumstance select only one box?
Depends on how much faith I have in the prediction. Or in fact what I'd predict the prediction to be.

Naturally, once the boxes are set up, choosing one option or the other doesn't change what's inside them.
However, if I'm always inclined to take just one (and the computer knows this), then the computer would predict I take only one and in fact I'd take only one.
And if I'm always inclined to choose both, they'd be empty.

More problematic is that it doesn't really matter whether the computer correctly predicts your behaviour. If it simply always predicts you'd take both, you can never win anything in this case.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 15th, 2005, 2:58pm
Quote JocK: "Or were both boxes empty from start, as the computer a year ago did predict correctly the state of the whole universe? But then it must be capable of predicting its own state a year ahead... "

Quote Towr: "It only needs to predict a summary of your state. You could take issue with it's error rate. But if the presupposition is that the prediction is correct, by whatever means, then that's not an issue."



OK, so we agree that this would lead to the conclusion that the computer is capable of predicting the whole universe including its own behaviour? (Remember: the physical universe is a K-system with a strongly mixing phase space.)

Well then... what I didn't tell you is that more than a year ago I constructed an exact copy of that computer. I used this copy (let's call it "comp B") to predict what the original computer ("comp A") would predict I would do with the boxes.

And before I used comp B to predict comp A's behaviour, I had made up my mind:

- If comp A would predict I would open both boxes, I will choose only one box (let's say the leftmost box).

- If, however, comp A would predict I would open only one of the boxes, I will open both boxes.

Now what prediction will the comp A make?

Indeed: just like "a barber who shaves all men who don't shave themselves, and no-one else" can not exist, in the same way a computer that can predict human behaviour can not exist.

Title: Re: Newcomb's Dilemma
Post by towr on Jun 15th, 2005, 3:29pm
No, the conclusion should be that no two such machines can exist without contradicting the premise that they can make a prediction about your behaviour if you have access to one of them. Not that one can't exist period.

And I still disagree that the computer would have to be able to predict the whole universe. You're not that complicated. If just one or a handful of particles from space hitting your brain would change your behaviour into the opposite, then you'd be a lot more wishy-washy. (Although that's going far too far into the physical for a thought experiment anyway.)

Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 15th, 2005, 8:09pm

on 06/15/05 at 10:48:22, JocK wrote:
You seem to miss my point. Remember causality? The computer has to go first. Subsequently I decide whether to take both boxes or not.

Ability to read my future actions? My future decision causing the computer to do something? Must be a strange world you live in!


Ah, but I was talking about your argument.  You claimed there was a contradiction, based on the following:

a)  Picking both boxes causes the player to end up with more money than if he just picked one.

b) Picking one box causes the player to end up with more money than if he picked both.

Now, how do you justify b?  You can try to explain in all kinds of ways, but at the root there has to be an implication "picking one box" -> "there's a million dollars in that box".  If you completely deny that connection, then there's no reason in the world not to pick both boxes.

From your line of thinking above - that the money is already there, and our choice has no effect on it - I can't imagine why you would hesitate in your choice.  Just take both.


Quote:
I have seen "The Missing Link", but never watched the a-causal version....


That would be pretty interesting.  "Aha, I see you backstabbed me in the future, but I'll beat you to the punch!"


Quote:
One spanner (out of the many) that can be thrown in: at times when integrated-circuit designers start worrying about quantum mechanical effects, surely a claim that quantum-mechanical effects play no role whatsoever in the functioning of a human brain would be naive.


Perhaps.  But let me clarify what I meant by "more or less".  According to quantum mechanics, particles are limited in how accurately their position and velocity can be measured; this has some eerie consequences for the real world.  For example, if we are standing next to a wall, there is a positive probability that we will suddenly end up on the other side of the wall.  However, the probability of this occurring is so low that, for all practical purposes, we can presume that we'll stay on the same side of the wall.

Perhaps the same is true with regard to our brains and their decision making.  Yes, there is some fundamental quantum uncertainty involved;  but perhaps, like our bodies jumping through the wall, the amount of deviation required to change a decision causes the probability to be negligible - i.e. it probably won't happen even once in our entire lives.

In any case, it seems that quantum indeterminacy doesn't really settle the 'free will' vs. 'determinism' issue.  Which I believe is basically unresolvable, since you can't "play reality twice" and see if the same things happen again.  (And yes, this last statement contains some ambiguities, which is precisely the problem.)

Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 15th, 2005, 8:14pm

on 06/15/05 at 10:56:30, JocK wrote:
Ok, no problem... you are allowed to write down this nonsense. (Having no free will whatsoever.... ) 

:P


So you agree that your "free will exists because causality does" reasoning is nonsense?



Title: Re: Newcomb's Dilemma
Post by Deedlit on Jun 15th, 2005, 8:19pm

on 06/15/05 at 15:29:51, towr wrote:
And I still disagree that the computer would have to be able to predict the whole universe. You're not that complicated. If just one or a handful of particles from space hitting your brain would change your behaviour into the opposite, then you'd be a lot more wishy-washy. (Although that's going far too far into the physical for a thought experiment anyway.)


Well, there's a problem with the butterfly effect.  The typical example is the weather, although it applies to just about anything:  even the most minute change in the initial conditions can have drastic changes in the long run.

So maybe a stray particle would cause a storm to occur, which causes your spouse to get killed in a car crash.  This would obviously have huge consequences on your mental state, and would certainly affect your decision on which boxes to pick.

Title: Re: Newcomb's Dilemma
Post by towr on Jun 16th, 2005, 1:41am

on 06/15/05 at 20:19:13, Deedlit wrote:
Well, there's a problem with the butterfly effect.  The typical example is the weather, although it applies to just about anything:  even the most minute change in the initial conditions can have drastic changes in the long run.
Long run, yes. But not within a minute. Besides, it depends on how stable the system is. Chaos doesn't really apply in computers, for instance; there is no butterfly effect in them, we need them stable. (Even in some new chips where the butterfly effect is exploited, the chip behaviour remains stable. We couldn't use it otherwise.)


Quote:
So maybe a stray particle would cause a storm to occur
It's very doubtful that the creation of a storm depends on one particle. If anything, it will only change when the storm occurs, not if.

But I suppose you're right that deterministic chaos may make prediction of any behaviour difficult if not impossible.
However, we're dealing with a thought experiment. We can imagine a world where all this is not an issue; where the computer simply does make the correct prediction, assuming you ever make a decision.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 16th, 2005, 10:57am


on 06/15/05 at 15:29:51, towr wrote:
And I still disagree that the computer would have to be able to predict the whole universe. You're not that complicated. If just one or a handful of particles from space hitting your brain would change your behaviour into the opposite, then you'd be a lot more wishy-washy. (Although that's going far too far into the physical for a thought experiment anyway.)

on 06/15/05 at 20:19:13, Deedlit wrote:
Well, there's a problem with the butterfly effect.  The typical example is the weather, although it applies to just about anything:  even the most minute change in the initial conditions can have drastic changes in the long run.

So maybe a stray particle would cause a storm to occur, which causes your spouse to get killed in a car crash.  This would obviously have huge consequences on your mental state, and would certainly affect your decision on which boxes to pick.

on 06/16/05 at 01:41:53, towr wrote:
Long run, yes. But not within a minute. Besides, it depends on how stable the system is. Chaos doesn't really apply in computers, for instance; there is no butterfly effect in them, we need them stable. (Even in some new chips where the butterfly effect is exploited, the chip behaviour remains stable. We couldn't use it otherwise.)


I agree with Deedlit. My previous remark about the fact that the phase space of physical systems shows strongly mixing behaviour is the same as saying: small causes have big effects.

And yes, the butterfly effect only applies long-run, but compared to the 'clock-frequency' of the human brain any macroscopic time (and certainly a minute) is huge.



Title: Re: Newcomb's Dilemma
Post by JocK on Jun 16th, 2005, 11:06am

on 06/16/05 at 01:41:53, towr wrote:
But I suppose you're right that deterministic chaos may make prediction of any behaviour difficult if not impossible.
However, we're dealing with a thought experiment. We can imagine a world where all this is not an issue; where the computer simply does make the correct prediction, assuming you ever make a decision.


What a strange thought that you would be able to exist in a world that is incapable of complex behaviour....  :o

But I am happy with the end-conclusion: we all seem to agree that a computer capable of predicting human behaviour can not be constructed in the universe we live in.





Title: Re: Newcomb's Dilemma
Post by towr on Jun 16th, 2005, 11:25am

on 06/16/05 at 10:57:16, JocK wrote:
And yes, the butterfly effect only applies long-run, but compared to the 'clock-frequency' of the human brain any macroscopic time (and certainly a minute) is huge.
The 'clock frequency' of the human brain isn't even 1000 Hz.
And it's too large scale for a few stray particles to change the global state that quickly, if at all. (I'd sooner believe the effect ripples out than explodes)

Chaos isn't magic. The flap of a butterfly's wing can't create storms in places where storms can't exist. It only changes where in the orbit around the strange attractor we are.

Title: Re: Newcomb's Dilemma
Post by towr on Jun 16th, 2005, 11:27am

on 06/16/05 at 11:06:44, JocK wrote:
What a strange thought that you would be able to exist in a world that is incapable of complex behaviour....  :o
Where did I ever say there couldn't be complex behaviour?


Quote:
But I am happy with the end-conclusion: we all seem to agree that a computer capable of predicting human behaviour can not be constructed in the universe we live in.
I don't remember ever agreeing to that. It may be difficult or impossible; it might also turn out to be quite doable. Really depends on how predictable we are. And what we limit ourselves to in the prediction.
You seem quite adamant to take both boxes, for example.

Title: Re: Newcomb's Dilemma
Post by rmsgrey on Jun 16th, 2005, 11:56am
Let's see if I have this straight: JocK is convinced that a device capable of predicting a single human decision is physically (or possibly logically) impossible.

I'm curious as to just where he draws the line - which of the following are and aren't possible when the subject is faced with a simple choice with well-defined consequences:

1) predicting someone's behaviour when they have no reason to try and be unpredictable and don't know you've made a prediction

2) predicting someone's behaviour when they have no reason to try and be unpredictable and do know you've made a prediction

3) predicting someone's behaviour when they have no reason to try and be unpredictable and know what you predicted

4) predicting someone's behaviour when they have reason to try and be unpredictable but don't know you've made a prediction

5) predicting someone's behaviour when they have reason to try and be unpredictable and do know you've made a prediction

6) predicting someone's behaviour when they have reason to try and be unpredictable and don't know what you've predicted


Among other things, this suggests that time travel is impossible (a device capable of sending information back in time can easily be used to predict someone's decision)

On the other hand, last I heard, Quantum effects include the potential existence of time loops, wormholes, etc (all small enough not to have practical applications, but hinting at possibilities) - unless something new has come up in the past 5 years or so to rule it out, it looks like the physics says "It doesn't happen" not "It couldn't happen" - in fact, there are a number of devices that could theoretically be constructed that, according to known physics (as of 10 years ago or so) would work as time machines. Since the simplest involve manipulating large masses of super-dense material so that they approach the speed of light, it seems unlikely that they'll be built any time soon, but they are theoretically possible.


The interesting thing about time travel under current (again as of about 10 years ago) theory is that, if you look at it closely, all the apparent paradoxes go away - if you try to shoot your grandfather, you'll fail for some (intuitively) low probability reason - your freedom to act is limited by your knowledge of the future, and you can't actually create a paradox.

Related, to my mind, it's not the existence of predictions about the future, or even knowledge of the existence of predictions that causes paradoxes, but knowledge of the content of predictions of our own actions that can lead to paradox.

Title: Re: Newcomb's Dilemma
Post by rmsgrey on Jun 16th, 2005, 11:59am

on 06/16/05 at 11:27:24, towr wrote:
You seem quite adamant about taking both boxes, for example.

Yes, if you are known to disbelieve in the correctness of the machine's predictive capabilities, then your actions are easy for a machine to predict - even for as simple a machine as a slip of paper...

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 16th, 2005, 12:54pm

on 06/16/05 at 11:59:18, rmsgrey wrote:
Yes, if you are known to disbelieve in the correctness of the machine's predictive capabilities, then your actions are easy for a machine to predict - even for as simple a machine as a slip of paper...


Known by whom or what?

Let me get this right: you are referring to a slip of paper knowing I don't believe in its predictive capabilities?

???


Title: Re: Newcomb's Dilemma
Post by JocK on Jun 16th, 2005, 1:11pm

on 06/16/05 at 11:27:24, towr wrote:
<predicting human behaviour> might also turn out to be quite doable. Really depends on how predictable we are. And what we limit ourself to in the prediction.
You seem quite adament to take both boxes, for example.


1) Apparently you are now referring to the circumstance in which one and the same person (me) is subjected to a repeat experiment? And in doing so you have made the transition from 'predicting future human behaviour' into 'extrapolating trends in human behaviour'. An entirely different subject that is not under discussion.

2) Were I subjected to a repeat experiment, as a disbeliever I would expect that the computer is pre-programmed to put nothing in the boxes. So I would definitely once in a while select a single box, just to prove the claim is absolute nonsense.



Title: Re: Newcomb's Dilemma
Post by JocK on Jun 16th, 2005, 1:32pm

on 06/16/05 at 11:56:34, rmsgrey wrote:
...  which of the following are and aren't possible when the subject is faced with a simple choice with well-defined consequences:

1) predicting someone's behaviour when they have no reason to try and be unpredictable and don't know you've made a prediction

2) predicting someone's behaviour when they have no reason to try and be unpredictable and do know you've made a prediction

3) predicting someone's behaviour when they have no reason to try and be unpredictable and know what you predicted

4) predicting someone's behaviour when they have reason to try and be unpredictable but don't know you've made a prediction

5) predicting someone's behaviour when they have reason to try and be unpredictable and do know you've made a prediction

6) predicting someone's behaviour when they have reason to try and be unpredictable and don't know what you've predicted


I assume you are referring to some simple choice between alternatives that are more-or-less equally attractive, so that free will really comes into play? Well, in that case, in all six of the cases above a machine cannot make an accurate prediction.



on 06/16/05 at 11:56:34, rmsgrey wrote:
Among other things, this suggests that time travel is impossible (a device capable of sending information back in time can easily be used to predict someone's decision)

On the other hand, last I heard, Quantum effects include the potential existence of time loops, wormholes, etc (all small enough not to have practical applications, but hinting at possibilities) - unless something new has come up in the past 5 years or so to rule it out, it looks like the physics says "It doesn't happen" not "It couldn't happen" - in fact, there are a number of devices that could theoretically be constructed that, according to known physics (as of 10 years ago or so) would work as time machines. Since the simplest involve manipulating large masses of super-dense material so that they approach the speed of light, it seems unlikely that they'll be built any time soon, but they are theoretically possible.


The interesting thing about time travel under current (again as of about 10 years ago) theory is that, if you look at it closely, all the apparent paradoxes go away - if you try to shoot your grandfather, you'll fail for some (intuitively) low probability reason - your freedom to act is limited by your knowledge of the future, and you can't actually create a paradox.


Lots of people are playing with various concepts, but there is no widely accepted quantum gravity theory.

I am very interested to get a reference to a paper that describes the mechanism by which my knowledge of the future would limit my free will.



on 06/16/05 at 11:56:34, rmsgrey wrote:
Related, to my mind, it's not the existence of predictions about the future, or even knowledge of the existence of predictions that causes paradoxes, but knowledge of the content of predictions of our own actions that can lead to paradox.


If it is possible to predict the future, you can also predict the content of such a prediction. (See my example of the 2nd computer.)



Title: Re: Newcomb's Dilemma
Post by towr on Jun 16th, 2005, 1:47pm

on 06/16/05 at 13:11:14, JocK wrote:
1) Apparently you are now referring to the circumstance in which one and the same person (me) is subjected to a repeat experiment?
No. Unless that's what you were talking about while making your case for taking both boxes.
You pretty much said what you would do; that makes it very easy to predict, unless you don't know what you'd do either.


Quote:
And in doing so you have made the transition from 'predicting future human behaviour' into 'extrapolating trends in human behaviour'. An entirely different subject that is not under discussion.
Predicting according to a trend seems a very valid method imo. Prediction is not necessarily foreseeing truth, after all.
And if the trend is incredibly strong, the prediction will be incredibly accurate. Just on that cue alone.
The better you know someone, the better you can predict his/her behaviour. And in general there are all sorts of limitations and guides influencing people's behaviour.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 16th, 2005, 2:00pm

on 06/16/05 at 13:47:50, towr wrote:
You pretty much said what you would do; that makes it very easy to predict, unless you don't know what you'd do either.


Aha.... that incredible machine can predict my future behaviour if I honestly tell it in advance what I'm gonna do...?

Ok, I think I can build such a machine...

But what does this have to do with Newcomb's paradox???


on 06/16/05 at 13:47:50, towr wrote:
Predicting according to a trend seems a very valid method imo.  


So, you never had a girlfriend..?  ;D

Title: Re: Newcomb's Dilemma
Post by rmsgrey on Jun 19th, 2005, 10:52am

on 06/16/05 at 13:32:51, JocK wrote:
I assume you are referring to some simple choice between alternatives that are more-or-less equally attractive, so that free will really comes into play? Well, in that case, in all six of the cases above a machine cannot make an accurate prediction.


Actually, I was thinking of a situation where there is a clear "better" outcome - as in Newcomb's Dilemma, where your best course of action is clear if you believe in the machine's predictive capabilities, and clear if you disbelieve in the machine's predictive capabilities.


Quote:
Lots of people are playing with various concepts, but there is no widely accepted quantum gravity theory.

I am very interested to get a reference to a paper that describes the mechanism by which my knowledge of the future would limit my free will.


"Billiard Balls in Wormhole Spacetimes with Closed Timelike Curves - Classical Theory" by Fernando Echeverria Gunnar Klinkhammer and Kip S. Thorne to be found in Physical Review, D44, #4, pp1077-1099 (15th August 1991) - which I haven't read myself, but I have read one of its sources, the Science Fiction novel "Timemaster" by Robert L Forward, and the same author's later treatment of the same concepts in his mixed Science Fact/Fiction book "Indistinguishable From Magic" published 1995 (ISBN 0-671-87686-4). In the years since encountering the idea first, I have kept half an ear on popular science publications, and have yet to encounter a reported theory that reliably prohibits (very) short-term reverse causation - I do remember seeing atleast one TV appearance of an excited physicist explaining that every time they tried allowing time travel under Quantum Mechanics, the dtailed calculations showed cancellation of the probability waves for paradoxical sequences, and reinforcement for non-paradox (in exactly the same way as the position wave of an electron cancels out between allowed orbits).


Quote:
If it is possible to predict the future, you can also predict the content of such a prediction. (See my example of the 2nd computer.)

So consider Schrodinger's Cat with time travel available. After opening the box to find out whether the cat survived, you put one of two cards into a sealed envelope and send it back in time to just before you began the experiment. Until you open the envelope or the box (assuming the envelope was sent back in a foolproof fashion, so there's no chance of its contents being wrong) the contents of each are in quantum superposition. Opening either one will then tell you the state of the other.

It's not the fact of the prediction that constrains your free will, but knowledge of its contents. Until you know what has been predicted, your future actions exist in a superposed state of all possible futures. Discovering a solid fact about the future then collapses the superposition down to a much narrower range of possibilities.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 20th, 2005, 1:40pm

on 06/19/05 at 10:52:57, rmsgrey wrote:
Actually, I was thinking of a situation where there is a clear "better" outcome - as in Newcomb's Dilemma, where your best course of action is clear if you believe in the machine's predictive capabilities, and clear if you disbelieve in the machine's predictive capabilities.


This is a misconception. I tried to explain that before, but maybe I was not clear enough.

I definitely disbelieve the machine's predictive capability, and therefore would reason as follows:

the makers of the machine know - just as I do - that their claim is absolute nonsense. So, if they put money in one of the boxes they would lose out. Hence, their only hope can be that all candidates - when given the choice between opening one box or both boxes - will open both. That way, they get away with preparing empty boxes while still maintaining their false claim about the machine's predictive capabilities.

Therefore, I would certainly not exclude the possibility that a disbeliever would open one box only. Certainly, if the experiment repeatedly 'confirms' the machine's predictive capability because all candidates open two boxes and get nothing, a disbeliever would be very tempted to open one box only. After all, he has nothing to lose, and the only thing to gain is the satisfaction of demonstrating these guys are selling snake oil...





on 06/19/05 at 10:52:57, rmsgrey wrote:
"Billiard Balls in Wormhole Spacetimes with Closed Timelike Curves - Classical Theory" by Fernando Echeverria Gunnar Klinkhammer and Kip S. Thorne to be found in Physical Review, D44, #4, pp1077-1099 (15th August 1991) - which I haven't read myself, but I have read one of its sources, the Science Fiction novel "Timemaster" by Robert L Forward, and the same author's later treatment of the same concepts in his mixed Science Fact/Fiction book "Indistinguishable From Magic" published 1995 (ISBN 0-671-87686-4).


Interesting paper. However, it only demonstrates that in one simple case a time-travel paradox can be 'repaired'. (It discusses a ball that moves in solitude, enters a wormhole, comes out at an earlier time, and collides with its younger self at that earlier time, thereby (potentially) preventing the ball from entering the wormhole. The paper demonstrates that there are infinitely many solutions that allow for a consistent (paradox-free) course of action in which the ball indeed enters the wormhole after a glancing collision.)

The paper does not demonstrate that 'something' prevents the ball from colliding with its younger self in such a way that neither ball enters the wormhole.



Title: Re: Newcomb's Dilemma
Post by rmsgrey on Jun 21st, 2005, 3:42am

on 06/20/05 at 13:40:16, JocK wrote:
This is a misconception. I tried to explain that before, but maybe I was not clear enough.

I definitely disbelieve the machine's predictive capability, and therefore would reason as follows:

the makers of the machine know - just as I do - that their claim is absolute nonsense. So, if they put money in one of the boxes they would lose out. Hence, their only hope can be that all candidates - when given the choice between opening one box or both boxes - will open both. That way, they get away with preparing empty boxes while still maintaining their false claim about the machine's predictive capabilities.

Therefore, I would certainly not exclude the possibility that a disbeliever would open one box only. Certainly, if the experiment repeatedly 'confirms' the machine's predictive capability because all candidates open two boxes and get nothing, a disbeliever would be very tempted to open one box only. After all, he has nothing to lose, and the only thing to gain is the satisfaction of demonstrating these guys are selling snake oil...

The two box version I know has $1000 in box B, and the choice between opening box A alone ($1,000,000) or opening both (box A empty) - in such a situation, a disbeliever believes you're paying $1000 for the pleasure of proving their predictions wrong.
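
A quick way to see what is at stake in this variant is to write the expected values out. A minimal Python sketch, assuming the $1,000 / $1,000,000 arrangement described above and a predictor that is simply right with probability p (the accuracy p is an assumed parameter; the puzzle only ever promises "amazing accuracy"):

# Expected payoffs in the $1,000 / $1,000,000 variant described above,
# assuming a predictor that is correct with probability p.

BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # Box A holds the $1,000,000 exactly when the machine correctly
    # predicted that you would open A alone.
    return p * BIG

def ev_two_box(p):
    # You always collect the visible $1,000 from box B; box A holds the
    # $1,000,000 only when the machine wrongly predicted one-boxing.
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.5005, 0.6, 0.9, 0.99):
    print(f"p = {p}:  one-box EV = {ev_one_box(p):>12,.0f}   two-box EV = {ev_two_box(p):>12,.0f}")

On this naive accounting the break-even point is p = 0.5005: above that, one-boxing pays more on average, which is why the argument keeps returning to whether the prediction can be correlated with the choice at all, rather than to the arithmetic.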



Quote:
Interesting paper. However, it only demonstrates that in one simple case a time-travel paradox can be 'repaired'. (It discusses a ball that moves in solitude, enters a wormhole, comes out at an earlier time, and collides with its younger self at that earlier time, thereby (potentially) preventing the ball from entering the wormhole. The paper demonstrates that there are infinitely many solutions that allow for a consistent (paradox-free) course of action in which the ball indeed enters the wormhole after a glancing collision.)

The paper does not demonstrate that 'something' prevents the ball from colliding with its younger self in such a way that neither ball enters the wormhole.

As I said, I haven't read the paper myself, and can't remember actual references.

10 minutes with google turns up:

Time machines: the Principle of Self-Consistency as a consequence of the Principle of Minimal Action (http://arxiv.org/abs/gr-qc/9506087)

and

Time machines and the Principle of Self-Consistency as a consequence of the Principle of Stationary Action (II): the Cauchy problem for a self-interacting relativistic particle (http://arxiv.org/abs/gr-qc/9607063)

refined and more extensive googling also uncovers:
Cauchy problem in spacetimes with closed timelike curves (http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1990PhRvD..42.1915F&db_key=AST)

Title: Re: Newcomb's Dilemma
Post by Ajax on Jun 21st, 2005, 5:09am
First of all, let's assume that our "software" (like BIOS or Windows, I am not sure which one is more suitable) is continuously changing as it is affected by external (sound, images, smell etc.) and internal (emotions, thoughts) factors. This would make it almost impossible to create an identical clone of us (if you've seen "The Boys from Brazil": the Nazis tried to raise a new Hitler by giving young clones of him experiences similar to his own). Even the slightest experience could be crucial to one's future; a thought caused by a minor incident could ignite a series of other thoughts that could end up in a great discovery (doesn't the legend say that Newton came up with the Universal Law of Gravitation because of an apple? OK, maybe more myth than truth).

However, let's say that from the moment you are born, sensors have been attached to your brain, your spinal cord and in general to any spot needed, so as to capture all the external data that you receive. Let's also assume that there exists a decoder capable of transforming this information into a signal fully comprehensible to a computer, and that thoughts and emotions can be read. Then, let's finally assume that there exists a computer with a structure like a human brain and with intelligence identical to yours (my belief is that eventually there will be artificial intelligence equal to and maybe greater than ours). This "brain" will believe that it is you, and with continuous corrections and program adjustments it could eventually become very much a second you.
Now, if it were disconnected shortly before you made your final decision (so that it doesn't know), and for that period you were isolated, would it be able to make the same guess? Even the random picking of a box is not that random (if you don't use a coin, but you guess it).
I'd say yes.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 21st, 2005, 3:51pm

on 06/21/05 at 05:09:17, Ajax wrote:
... would it be able to make the same guess? Even the random picking of a box is not that random (if you don't use a coin, but you guess it).
I'd say yes.


This is remarkable. That amazing machine can predict the behaviour of a system as complicated as a living human being, but fails to predict the outcome of a simple coin toss...?


Interestingly, that brings me to the following argument against the possibility of predicting the behaviour of a human:

A physicist enters the stage to participate in "Newcomb's Quiz". At the crucial moment she first opens the small shielded cage she carried onto the stage. This cage - prepared just before the quiz by the physicist herself - contains Schrodinger's cat. When the cat is dead she opens two boxes, and when the cat is alive she opens one box.

How will that predictive machine handle the fundamental uncertainty caused by quantum superposition?

[Of course she doesn't really need to sacrifice a cat for this experiment. Any microscopic system (e.g. an electron) in a superposition of two quantum states (spin-up or spin-down) subjected to a measurement to determine the actual state (the Stern-Gerlach measurement) will do.]
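
A minimal Python simulation of this strategy, assuming the machine must commit to its prediction before the spin measurement and that the contestant ties the choice to that measurement and to nothing else (the set-up and names here are illustrative assumptions):

import random

# Toy model of the Stern-Gerlach strategy: the machine commits first,
# then an independent fair 'quantum' bit decides one box vs two boxes.

CHOICES = ("one box", "two boxes")

def hit_rate(predictor, trials=100_000):
    hits = 0
    for _ in range(trials):
        prediction = predictor()              # committed before the measurement
        measurement = random.choice(CHOICES)  # stand-in for the spin-up / spin-down outcome
        hits += (prediction == measurement)
    return hits / trials

# Two predictors: one that always says "two boxes", one that guesses at random.
# Against a choice driven by an independent fair bit, both score about 0.5.
print(hit_rate(lambda: "two boxes"))
print(hit_rate(lambda: random.choice(CHOICES)))

Whatever model of the contestant the predictor uses, its hit rate against an independent fair bit hovers around 50%. towr's reply further down questions exactly the assumption that the machine and the measured system remain unentangled.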


Title: Re: Newcomb's Dilemma
Post by Ajax on Jun 22nd, 2005, 12:20am
That's why I wrote not to be influenced by some external factor (coin flip, cat, electron, whatever), but at that moment to make a selection after some thought, or impulsively. Do you believe that if you were asked to choose between one or two, just like that, your choice would have been made totally by chance? Are there unknown factors that would push you to take one of the two choices (if you ask someone to pick a colour quickly, most probably he'll say red)? Is it really 50-50? I'm only suggesting; I don't have any well-founded opinion on that.

Anyway, as for the problem itself, assuming that the computer may be able to predict which box you'll open, but not predict or understand what you're thinking of, I'd say "I select box A, and my prize is the opposite of what box B has, which I open to show you". It reminds me of a riddle in the comic series "The Technopriests" ("Les Technopères" in French) by Alexandro Jodorowsky and Zoran Janjetov (I think it is in vol. 2).

Title: Re: Newcomb's Dilemma
Post by towr on Jun 22nd, 2005, 12:26am

on 06/21/05 at 15:51:53, JocK wrote:
How will that predictive machine handle the fundamental uncertainty caused by quantum superposition?
It needn't bother. It is in superposition itself, entangled with the cat and the content of the boxes.
Observation of any of the three entangled elements fixes the state of the other two to something consistent.

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 22nd, 2005, 10:14am

on 06/22/05 at 00:26:54, towr wrote:
It needn't bother. It is in superposition itself, entangled with the cat and the content of the boxes.
Observation of any of the three entangled elements fixes the state of the other two to something consistent.


How do you get your computer in a quantum-entangled state with my carefully prepared spin-1/2 system that I brought to the scene from a far away location? Are you basically saying that your computer can be brought in entanglement with the whole cosmos? A few postings ago you claimed that to predict human behaviour the computer needn't bother about the whole universe, and now you claim the computer even to be in quantum superposition with the whole universe?

Anyway, we are making progress: to predict human behaviour, initially claims were made that we need just a powerful classical computer; now - in a last (?) attempt to defend the suggestion that predicting human behaviour might theoretically be possible - we find ourselves heavily relying on quantum computing at a scale covering the whole universe...


Title: Re: Newcomb's Dilemma
Post by rmsgrey on Jun 22nd, 2005, 11:08am

on 06/22/05 at 10:14:44, JocK wrote:
How do you get your computer in a quantum-entangled state with my carefully prepared spin-1/2 system that I brought to the scene from a far away location? Are you basically saying that your computer can be brought in entanglement with the whole cosmos? A few postings ago you claimed that to predict human behaviour the computer needn't bother about the whole universe, and now you claim the computer even to be in quantum superposition with the whole universe?

Anyway, we are making progress: to predict human behaviour, initially claims were made that we need just a powerful classical computer; now - in a last (?) attempt to defend the suggestion that predicting human behaviour might theoretically be possible - we find ourselves heavily relying on quantum computing at a scale covering the whole universe...

I still believe a time-loop would work - by forcing self-consistency (though it appears self-consistency is only one of several hypotheses advanced to cope with pre-emptive matricide - and the only one that allows genuine time loops rather than invoking parallel universes or some form of cosmic censorship)

Anyway, you and towr appear to be at cross purposes - towr is apparently talking about an infallible black box and the consequences of its ability to predict, while you are attempting to prove the impossibility of such a device (when the relevant underlying issues are still open questions).

Establishing the behaviour of such a device may help settle the question of whether it's (logically) possible, but establishing (physical) impossibility doesn't settle questions of behaviour if you assume such a device.

Title: Re: Newcomb's Dilemma
Post by towr on Jun 22nd, 2005, 11:30am

on 06/22/05 at 10:14:44, JocK wrote:
How do you get your computer in a quantum-entangled state with my carefully prepared spin-1/2 system that I brought to the scene from a far away location? Are you basically saying that your computer can be brought in entanglement with the whole cosmos?
Well, if that's what it takes before you'll assume the premises and consider the problem, sure.


Quote:
A few postings ago you claimed that to predict human behaviour the computer needn't bother about the whole universe, and now you claim the computer even to be in quantum superposition with the whole universe?
The whole universe is one large superposition. There is no one outside it to observe it, after all.
If we put the box with Schrodinger's cat in a room, and lock Schrodinger in there as well, then not only do we not know whether the cat is alive or dead, we also don't know whether Schrodinger has opened the box or not. All combinations are possible, and have some likelihood. It's one big superposition which won't collapse until it is observed by someone 'outside the box'.
Of course you can repeat this as many times as you have observers. The wavefunction changes dramatically with everything you add, but it will remain in some superposition for a potential outside observer.


Quote:
Anyway, we are making progress: to predict human behaviour, initially claims were made that we need just a powerful classical computer;
Oh, I still believe we can, for all practical purposes, predict people's behaviour with a sufficiently sophisticated computer (+ software). As long as they are themselves the source of the behaviour, and not just doing what a random particle tells them to.
To predict that, we'd need either time travel, supernatural foresight, or to discover some deterministic level beneath quantum mechanics.


Quote:
now - in a last (?) attempt to defend the suggestion that predicting human behaviour might theoretically be possible - we find ourselves heavily relying on quantum computing at a scale covering the whole universe...
Heh, predicting (some) human behaviour is very easy. I predict this won't be the last post.
(By all means, post if you disagree with that prediction ;D)

Title: Re: Newcomb's Dilemma
Post by JocK on Jun 22nd, 2005, 1:09pm

on 06/14/05 at 12:15:54, JocK wrote:
Hmmm.... have the feeling that this will not be the last post in this thread...  :)



on 06/22/05 at 11:30:54, towr wrote:
I predict this won't be the last post.



Copycat predictions don't count!  :P






Title: Re: Newcomb's Dilemma
Post by Brian on Oct 11th, 2005, 10:10am
I believe Smullyan mentions the omniscience/choice paradox in one of his books, and he points out (like JocK) that it seems logically contradictory.

Suppose the computer is going to predict whether you'll have eggs or toast for breakfast, and included in the manifold inputs to your supposedly deterministic brain is the computer's prediction.  That is, you're told the prediction before you get into the kitchen.

It's hard to disprove the theoretical possibility that I don't have free will -- maybe my brain is really just a complicated billiard table and my feeling of free will is an illusion.  But I know that I can be stubborn, and deliberately choose the opposite of whatever prediction is given to me, and thereby make it impossible for the computer to make an accurate prediction.
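
This 'stubborn subject' argument is essentially a diagonal argument, and it can be written out in a few lines. A minimal Python sketch, with made-up names, of the eggs-or-toast case where the announced prediction is an input to the decision:

# Brian's eggs-or-toast scenario, with the announced prediction fed back
# into the decision.  No announced prediction can be correct.

CHOICES = ("eggs", "toast")

def stubborn_subject(announced):
    # Deliberately pick the opposite of whatever the machine announced.
    return "toast" if announced == "eggs" else "eggs"

# Whatever the machine announces, the announcement is falsified: no fixed point.
for announced in CHOICES:
    actual = stubborn_subject(announced)
    print(f"machine announces {announced!r} -> subject eats {actual!r} -> prediction correct: {announced == actual}")

Note that this only rules out a predictor whose output is shown to the subject before the choice; it says nothing about a sealed prediction whose content you never learn, which is the distinction rmsgrey drew earlier between knowing a prediction exists and knowing what it says.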

Title: Re: Newcomb's Dilemma
Post by towr on Oct 11th, 2005, 10:23am
If you violate the premise that the prediction is accurate, sure, then it's illogical. Violating a premise always is.

But it's conceivable that you might act as predicted, so under the premise that the prediction is accurate, logically, you must do what's predicted.

Title: Re: Newcomb's Dilemma
Post by Sjoerd Job Postmus on Oct 11th, 2005, 11:06am
Funnily enough, this AI computer can only predict the actions of the person who built it...

On the other hand, asking it "What will I do?" will only work if the computer realizes that your action doesn't depend on the answer, or that you will do exactly what the computer says. If it knows you will avoid doing what it says you're going to do, it must go into an infinite loop.

So, if the computer has only been asked "What box will <name> choose?", it will just give the correct answer. If it is told that you will also get the answer, it'll loop.

Funnily enough, the computer has been asked "What box will <name> choose?", so what you'll do is choose to discard box A :)

j/k. The richer you are, the more likely you are to take a risk.

Poor person takes box A.
Extremely rich person takes box B.
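
Half-joking or not, the rich/poor remark has a standard formalisation: with a concave utility of wealth, the guaranteed $10,000 is worth relatively more to a poor contestant. A minimal Python sketch using log utility; the wealth levels and the small subjective probability q that the cheque is actually in box B are illustrative assumptions, not anything stated in the thread:

import math

# Log-utility comparison of the sure $10,000 (box A) against the gamble on
# box B, for a contestant who gives the cheque only a small credence q of
# being there.  Payoffs are the puzzle's; wealth levels and q are made up.

SURE, JACKPOT = 10_000, 1_000_000_000

def prefers_box_b(wealth, q):
    eu_a = math.log(wealth + SURE)                                      # open box A: $10,000 for certain
    eu_b = q * math.log(wealth + JACKPOT) + (1 - q) * math.log(wealth)  # open box B: cheque or nothing
    return eu_b > eu_a

q = 0.001  # a disbeliever's small credence that the cheque is really there
for wealth in (1_000, 100_000, 10_000_000, 100_000_000):
    print(f"wealth ${wealth:>11,}: prefers box B = {prefers_box_b(wealth, q)}")

With these particular numbers the preference flips somewhere between $100,000 and $10,000,000 of existing wealth - the 'richer people gamble' intuition in miniature.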

Title: Re: Newcomb's Dilemma
Post by sreenivasan on Oct 12th, 2005, 5:51am
The best possible solution for this riddle would be to take the box which you didn't open. This makes perfect sense, because if you had opened A then you get to take B, and if you had opened B then you get to take A. So I guess you should either negotiate with the person making you this offer, or switch the labels on the boxes and then open one.

Title: Re: Newcomb's Dilemma
Post by Grimbal on Oct 12th, 2005, 9:14am
The computer would have predicted that, wouldn't it?

Title: Re: Newcomb's Dilemma
Post by sreeni_rox on Oct 12th, 2005, 3:10pm
Here the catch is that the computer will predict the box which you are going to open, and not the box which you are going to take. So if you say "I will take the box which I didn't open", the problem still survives. Let us say that you are the person who is going to open the box. You claim that you will be taking the box which is not opened by you, irrespective of its contents. You enter the room and open the box named A. In this case the computer would have already predicted that you will open the box named A, and box B will contain a billion dollars. But because you have already claimed that you will be taking the box you didn't open, you will take box B, which contains a billion dollars. I guess this explains my solution. If I am wrong, please let me know the correct solution. Cheers!!!

Title: Re: Newcomb's Dilemma
Post by mexican on Dec 5th, 2005, 2:31pm
Let us assume the computer is fallible ("amazing accuracy"), though we are uncertain just how fallible. Accepting that presupposition, I think I can increase the inherent unreliability of the computer's prediction by choosing to base my judgement on a random occurrence, such as the flip of a coin, and choosing B. Flipping a coin N times and choosing B should increase the likelihood that the computer chose A.
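
How fallible is fallible enough? A minimal back-of-the-envelope Python sketch, using the puzzle's $10,000 / $1,000,000,000 payoffs and treating the computer as right with some probability p (the value of p is assumed; it is exactly what the thread cannot agree on):

# Expected value of opening B alone, in the original one-box-choice setup,
# when the computer is right with probability p.

SMALL, BIG = 10_000, 1_000_000_000

for p in (0.9, 0.99, 0.999, 0.9999, 0.999999):
    ev_a = SMALL            # opening box A always yields $10,000
    ev_b = (1 - p) * BIG    # box B pays out only when the prediction was wrong
    better = "B" if ev_b > ev_a else "A"
    print(f"accuracy {p}:  EV(open A) = {ev_a:>10,.0f}   EV(open B) = {ev_b:>10,.0f}   -> open {better}")

# Break-even: (1 - p) * 1e9 = 1e4, i.e. p = 0.99999.

So on this naive expected-value accounting, "amazing accuracy" has to mean wrong less than once in 100,000 trials before opening A becomes the better bet.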



Powered by YaBB 1 Gold - SP 1.4!
Forum software copyright © 2000-2004 Yet another Bulletin Board