Responsibility without free will
Posted: 29 November 2007 03:38 PM   [ # 301 ]   by Bryan
StephenLawrence - 29 November 2007 02:43 PM
Bryan - 29 November 2007 02:26 PM

Causal determinism, by definition, states that the condition of the world at A can only result in one state of the world at subsequent moment B.
http://plato.stanford.edu/entries/determinism-causal/

You’re telling us that causal determinism could be false, while the condition of the world at A can only result in subsequent state B.

That is complete and utter nonsense.  Put another way, you’re saying that causal determinism can be true and ~true at the same time and in the same sense.

No, I’m not. I’m just saying that if the state of the world at subsequent moment B is not a result of the condition of the world at A, then there is no reason to suppose that it could be other than it is.

That is an extremely implausible explanation of the statement below (which should look very familiar to you since you wrote it):

“If causal determinism is not true but instead there is another explanation for an event, there is no reason to suppose that a feature of that explanation isn’t that access to other possible worlds is denied too.”

Note that you’re talking about an event, and you’re talking about only one possible world.
A caused event that could not be otherwise is causally deterministic by definition.  Period.

It’s just taking the view that things don’t happen as a result of what went before but there is some other reason that they happen.

Unfortunately for you, it also matches the definition of causal determinism.

But I’ve got to give you due credit for avoiding that issue very successfully in your answer.  :grin:

Of course if things do happen as a result of what happened before, be that due to statistical probability generated by what happened before or absolute certainty, either of these rules out free will.

That claim doesn’t follow unless you identify probability as an ontological cause.  That’s not going to happen via any other means than your bald assertion.

Whichever way you look at it, it seems you must be wrong.

Of course you might be right but as you can’t see how and I don’t believe it, we’ve gone as far as we can go with this.

Pretty funny how you’re blind to the fact that you’re doing precisely what you accuse me of doing (except that you’re the only one guilty of it).

“It’s just taking the view that things don’t happen as a result of what went before but there is some other reason that they happen.”

If you take that view then you ensure that you can never have any reasonable evidence in favor of causation.  Ergo, no reason to claim that causal determinism is true.

I can put that one in a deductive syllogism for you, also.
In the meantime you can amuse us by offering alternative ways in which a necessary universe could contain an action minus causal determinism.  With explanations, since you seem to think that an argument lacking those is worthless.

Posted: 30 November 2007 08:52 AM   [ # 302 ]   by Bryan
StephenLawrence - 29 November 2007 02:09 PM
Bryan - 29 November 2007 01:54 PM

We just can’t think of a way that I could avoid stabbing someone which is not luck, as far as I’m concerned, in this case.

We can’t?

I can’t and you can’t, therefore we can’t.

Stephen

Your second premise is questionable.  It appears to be based on an appeal to incredulity.

[ Edited: 30 November 2007 09:42 PM by Bryan ]
Posted: 30 November 2007 09:14 AM   [ # 303 ]   by StephenLawrence
Bryan - 30 November 2007 08:52 AM
StephenLawrence - 29 November 2007 02:09 PM
Bryan - 29 November 2007 01:54 PM

We just can’t think of a way that I could avoid stabbing someone which is not luck, as far as I’m concerned, in this case.

We can’t?

I can’t and you can’t, therefore we can’t.

Stephen

Your second premise is questionable.  It appears to be based on an appeal to incredulity.

You guessed wrong.

My second premise is based on the belief that you are in circumstances in which, if you could, you would!

And you would then post it to prove it.

Therefore I don’t believe you can do it, unless you show me you can do it!

Stephen

Posted: 30 November 2007 09:37 AM   [ # 304 ]   by Bryan
StephenLawrence - 30 November 2007 09:14 AM
Bryan - 30 November 2007 08:52 AM
StephenLawrence - 29 November 2007 02:09 PM
Bryan - 29 November 2007 01:54 PM

We just can’t think of a way that I could avoid stabbing someone which is not luck, as far as I’m concerned, in this case.

We can’t?

I can’t and you can’t, therefore we can’t.

Stephen

Your second premise is questionable.  It appears to be based on an appeal to incredulity.

You guessed wrong.

You only think I guessed wrong.  That’s what’s so funny!

My second premise is based on the belief that you are in circumstances in which, if you could, you would!

I have, but you don’t believe it; thus the appeal to incredulity.

And you would then post it to prove it.

Let me spell it out for you:  You didn’t stab him 50% of the time because there was a 50% probability of stabbing him.  The 50% probability stems from the number of times you stabbed him out of 10,000.

You’re turning descriptive probability illicitly into some kind of ontological cause of your actions.  You’re making an error in reasoning.
http://www.centerforinquiry.net/forums/viewreply/28801/
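
To make the descriptive/ontological distinction concrete, here is a minimal Python sketch; the function names, the coin-flip trial model, and the 0.5 parameter are illustrative assumptions, not anything taken from the posts:

import random

# Ontological reading: a fixed parameter p *drives* each outcome,
# i.e. chance is built into the model as a cause.
def generative_trial(p=0.5):
    return random.random() < p

# Descriptive reading: probability is a summary computed after the
# fact from a record of outcomes, whatever produced them.
def descriptive_probability(outcomes):
    return sum(outcomes) / len(outcomes)

record = [generative_trial() for _ in range(10000)]
print(descriptive_probability(record))  # roughly 0.5: a report, not a cause

Note that descriptive_probability is well defined for any record of 10,000 outcomes and carries no information about what produced them; that is the sense in which the 50% figure describes the stabbings rather than causes them.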

You don’t address the argument.  You simply ignore it and insist that I haven’t made my point.

Therefore I don’t believe you can do it, unless you show me you can do it!

Then show that the proof I’ve already offered doesn’t work, lest somebody like a Stephen assume that your failure to address it means that you cannot address it.

Posted: 30 November 2007 10:49 AM   [ # 305 ]   by StephenLawrence
Bryan - 30 November 2007 09:37 AM

Let me spell it out for you:  You didn’t stab him 50% of the time because there was a 50% probability of stabbing him.  The 50% probability stems from the number of times you stabbed him out of 10,000.

You’re turning descriptive probability illicitly into some kind of ontological cause of your actions.  You’re making an error in reasoning.
http://www.centerforinquiry.net/forums/viewreply/28801/

You don’t address the argument.  You simply ignore it and insist that I haven’t made my point.

True, and that is because you haven’t!

Bryan, it just doesn’t matter whether I’m making an error in reasoning with my argument or not; you may well be right.

My point is that you are not supplying an argument of your own.

I’m convinced the reason is that you can’t.  You can tell me I’m committing 27 fallacies and 16 errors in reasoning, but none of it will make the slightest bit of difference unless you supply some substance to your argument.

You have no idea how we could do otherwise in the way you claim, i.e. you wouldn’t know where to start if asked to make a machine which could do this.

And you have no idea how, if you achieved it, it could possibly add more than a luck factor.

And you are in denial if you think such a choice making machine wouldn’t be debilitated.

It is completely obvious that such a chess computer would choose bad moves every now and then and lose. You got around this by only allowing it to do otherwise very rarely, but that’s daft. We have free will if we could do otherwise, but not often enough for us to fall apart. LOL

Actually, there is now a draughts (checkers) playing machine which can only ever win or draw. If we added an element of probability it would lose a little or a lot, depending on how often this element of probability played a part.

You know that adding “could do otherwise” between the evaluation process and the action would make us out of control, so you shove it in there a bit earlier instead to avoid that problem. But still you have absolutely no reason to think it could give us responsibility.
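
(To put the point about how often the random element fires concretely: a toy move chooser in Python. The names, the evaluate parameter, and the eps value are illustrative assumptions, not anything Bryan has specified.)

import random

# Toy move chooser: evaluate scores each legal move; with
# probability eps the machine overrides its own evaluation and
# picks at random, the injected ability to do otherwise.
def choose_move(moves, evaluate, eps=0.01):
    if random.random() < eps:
        return random.choice(moves)  # every now and then, a possibly bad move
    return max(moves, key=evaluate)

The larger eps is, the more often the machine departs from its best move, which is the sense in which it would lose a little or a lot depending on how often the element of probability played a part.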

Whatever the rights and wrongs of my arguments, what is indisputable is that you don’t have one.

So I shall play no further part in this farcical conversation.

Stephen

[ Edited: 30 November 2007 11:38 AM by StephenLawrence ]
Posted: 30 November 2007 12:40 PM   [ # 306 ]   by Bryan
StephenLawrence - 30 November 2007 10:49 AM
Bryan - 30 November 2007 09:37 AM

Let me spell it out for you:  You didn’t stab him 50% of the time because there was a 50% probability of stabbing him.  The 50% probability stems from the number of times you stabbed him out of 10,000.

You’re turning descriptive probability illicitly into some kind of ontological cause of your actions.  You’re making an error in reasoning.
http://www.centerforinquiry.net/forums/viewreply/28801/

You don’t address the argument.  You simply ignore it and insist that I haven’t made my point.

True, and that is because you haven’t!

Bryan, it just doesn’t matter whether I’m making an error in reasoning with my argument or not; you may well be right.

My point is that you are not supplying an argument of your own.

I established control in a libertarian free will model.
I established the ability to do otherwise in a libertarian free will model.
I established deep moral responsibility according to Strawson’s definition.

That meets each of the criteria for libertarian free will except “ultimate responsibility,” which ends up being a sham.

I’m convinced the reason is that you can’t.  You can tell me I’m committing 27 fallacies and 16 errors in reasoning, but none of it will make the slightest bit of difference unless you supply some substance to your argument.

My argument has all the substance it needs, since you haven’t had an answer for any of the three prongs of the argument.  You either rebut or accept.  Doing neither places you in no position to assert that my argument has failed, except in terms of convincing you, which—for all we know—is impossible regardless of the evidence.

You have no idea how we could do otherwise in the way you claim, i.e. you wouldn’t know where to start if asked to make a machine which could do this.

Likewise, you have no idea how we could do something necessarily if causal determinism isn’t true—but for some reason that doesn’t stop you from saying it.  Your position is hypocritical, Stephen, and that isn’t even its greatest weakness.

The fact is that I do have an idea as to how it could be done, but since I have a good idea that you’ll only accept a causal explanation as an explanation I took the tack of showing your hypocrisy instead of trying to jump through what will inevitably be an absurd hoop.

And you have no idea how, if you achieved it, it could possibly add more than a luck factor.

And you failed to establish luck as an ontological factor in determining outcomes.  Each of your objections met this type of fate, yet you continue to object.

And you are in denial if you think such a choice making machine wouldn’t be debilitated.

Would I be in denial if I doubted you had the capability of establishing your claim with evidence and logic?

It is completely obvious that such a chess computer would choose bad moves every now and then and lose.

How is that worse than having a chess computer doomed to pick the same bad move every time?

You got around this by only allowing it to do otherwise very rarely, but that’s daft. We have free will if we could do otherwise, but not often enough for us to fall apart. LOL

... because the appeal to ridicule is suddenly a logical argument?

Actually, there is now a draughts (checkers) playing machine which can only ever win or draw. If we added an element of probability it would lose a little or a lot, depending on how often this element of probability played a part.

Ah, so the computer was lucky enough to be programmed well.  Like winning the lottery, eh?

You know that adding “could do otherwise” between the evaluation process and the action would make us out of control, so you shove it in there a bit earlier instead to avoid that problem. But still you have absolutely no reason to think it could give us responsibility.

Other than the fact that I established DMR according to Strawson’s criterion (yet another argument you’ve ignored)?

Whatever the rights and wrongs of my arguments, what is indisputable is that you don’t have one.

So I shall play no further part in this farcical conversation.

If you make good on your promise then how will you react to the following?

1. We could do otherwise.

2. That 1 could make us ultimately responsible for our actions.

The proof of number 1 is elementary.  All that is needed is the concession that causal determinism is not necessarily true.  If Stephen can’t make that concession then he will inevitably beg the question regarding the existence of libertarian free will.  His own presuppositions will prevent him from accepting it.  He doesn’t have to believe it exists.  He just needs to allow that it’s possible so that we can construct a possible world in which it manifests itself.

Causal determinism, by definition, is the state of affairs whereby the state of the world at A necessarily results in the state of affairs at subsequent moment B.  Once the existence of libertarian free will is established (for the sake of argument), the first criterion is met (the argument is not whether or not we have libertarian free will but whether or not it meets the above criteria).
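
In the possible-worlds idiom of the SEP entry cited earlier, that definition can be put schematically; $S_t(w)$, the state of world $w$ at moment $t$, is notation introduced here for illustration:

\[
\text{for all worlds } w, w' \text{ with the same laws:}\quad S_A(w) = S_A(w') \;\Rightarrow\; S_B(w) = S_B(w') \text{ for every later moment } B.
\]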

P1: If you have a mental state Q that was determined by your will, which was not in turn causally determined by preceding states, then you have ultimate responsibility for who you are (at least in some mental respects).
P2: DMR* is having ultimate responsibility for the way you are (at least in certain mental respects).

C: If you have mental state Q that was determined by your will, which was not in turn causally determined by preceding states, then you have DMR.

*Strawson:  So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are—at least in certain mental respects
http://www.naturalism.org/strawson_interview.htm
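
Schematically, writing $W$ for the antecedent of P1 (a mental state Q determined by a will that was not itself causally determined), $U$ for ultimate responsibility, and $D$ for DMR, the syllogism is ordinary chaining; the letters are an illustrative rendering, not notation from the posts:

\[
P1:\ W \to U \qquad P2:\ D \leftrightarrow U \qquad \therefore\ C:\ W \to D
\]

C follows from P1 together with the right-to-left direction of P2.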

Probably Stephen’s sticking point is that “certain mental respects,” in his mind (and perhaps Strawson’s as well), means the initial mental respects, regardless of how those initial states influence subsequent mental states.

Here are two models stemming from initial mental states.  One for determinism and one for the alternative.

A=>B=>C=>D=>E=>X

A=>B(1)=>C(1)=>D(1)=>X(1)
                   =>X(2)
       =>C(2)=>D(2)=>X(3)
                   =>X(4)
A=>B(2)=>C(3)=>D(3)=>X(5)
                   =>X(6)
       =>C(4)=>D(4)=>X(7)
                   =>X(8)

A does not change from top to bottom because it is the given starting mental state.  I’ve arbitrarily limited the LFW model to two possible/accessible choices at each juncture just to keep it simple.
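
The difference between the two diagrams can also be checked computationally. A minimal Python sketch, reading the three branching junctures (A to B, B to C, D to X) off the arbitrary choices above:

from itertools import product

# Under causal determinism each state has exactly one successor, so
# the initial state A fixes a single history: A=>B=>C=>D=>E=>X.
deterministic_outcomes = 1

# In the LFW model there are two accessible options at each of the
# three branching junctures, so A is compatible with 2**3 = 8 end
# states, X(1) through X(8).
lfw_histories = list(product((1, 2), repeat=3))
print(len(lfw_histories))  # 8: A alone does not settle which X obtains

In the first model the question “which X?” is settled by A; in the second, the same question has eight admissible answers, so pointing back to A cannot be what settles it.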

In the first model, it is fair to say that initial state A is “ultimately responsible” for subsequent state B.  If we wish, we can add that state A is ultimately responsible for state C, for state C was in turn bound to occur given state A.  We can use the same reasoning right on up through X, since given causal determinism X was bound to happen following A.

But what about the other model?  Can we say that A is “ultimately responsible” for any of the subsequent states from B(1) through X(8)?

Picture each of the mental states as billiard balls.  You strike the cue ball on trajectory A, which should cause it to strike B (given causal determinism).  But instead of striking B, the ball changes trajectory and strikes B(2).  Where is the ultimate responsibility for result B(2)?

If we take as given that causal determinism would cause the striking of billiard ball B, we have a problem accounting for ultimate responsibility.  A scientist would tend to look at the billiard ball to see what had caused it to upset expectations.  But the scientist works under the assumption of causal determinism so he can’t answer our question without dropping his assumption.  But the scientist’s first move is correct.  Under the assumptions of LFW, only the ball can be ultimately responsible for the action because no preceding state (or combination thereof) provides an explanation for the outcome. 

Of course, Stephen did make clear that he doesn’t believe in “ultimate responsibility” at all even if causal determinism is true (after all, what caused A?).  That only prompts us to wonder what role he sees for “ultimate responsibility”—why should we consider his definition of the term a necessary prerequisite for LFW?

That argument appears to rest on his unfounded assertion that chance acts as some form of ontological entity determining outcomes, as though without putting himself to the test 10,000 times Stephen could have predicted how many times he would stab someone given 10,000 opportunities.

Stephen admits that his reasoning could be wrong, but refuses to accept the consequences for his error.  And, in fact, he appears to have taken up the ball with the intent of going home with it.

The “ultimate responsibility” argument Strawson and Stephen use is a smokescreen.  Who cares what your mental state was at A if A need not result in B?  Moral responsibility occurs when the being starting at mental state A knowingly chooses its course after that point where more than one option remains accessible.

Just as you look to the cue ball for responsibility when it strikes B(2), so you look to the free moral agent for responsibility when it takes one accessible option over another.  If you start looking for what caused the ball to strike B(2) instead of B(1) beyond the cue ball itself then you’re operating under the assumption of causal determinism.  And even then you’ll never find an ultimate cause as defined by Stephen.

In the end, the “ultimate responsibility” thing has nothing to do with an awareness of choices ending in an uncaused cause, which is what a LFW advocate would have had in mind.  It turns out to be the challenge to explain LFW minutely in terms of causal determinism.
