Let me spell it out for you: You didn’t stab him 50% of the time because there was a 50% probability of stabbing him. The probability that you would stab him 50% of the time stems from the number of times you stabbed him out of 10,000.
You’re illicitly turning descriptive probability into some kind of ontological cause of your actions. You’re making an error in reasoning.
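The distinction can be sketched in a few lines of Python. This is a toy simulation of my own, not anyone’s actual experiment: the 50% figure is a summary computed from the tally after the fact, not an agent steering any single trial.

```python
import random

random.seed(42)  # fixed seed so the tally is reproducible

trials = 10_000
stabbed = sum(1 for _ in range(trials) if random.random() < 0.5)

# The "probability" here is descriptive: it is computed AFTER the
# fact from the number of times the outcome occurred out of 10,000.
# It summarizes the outcomes; it did not cause any one of them.
observed_frequency = stabbed / trials
print(observed_frequency)
```

Nothing in the loop consults `observed_frequency` while the trials run; the number only exists once the trials are over, which is the point.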
You don’t address the argument. You simply ignore it and insist that I haven’t made my point.
True and that is because you haven’t!
Bryan, it just doesn’t matter whether I’m making an error in reasoning with my argument or not; you may well be right.
My point is you are not supplying an argument of your own.
I established control in a libertarian free will model.
I established the ability to do otherwise in a libertarian free will model.
I established deep moral responsibility according to Strawson’s definition.
That meets each of the criteria for libertarian free will except “ultimate responsibility,” which ends up being a sham.
I’m convinced the reason is that you can’t. You can tell me I’m committing 27 fallacies and 16 errors in reasoning, but none of it will make the slightest bit of difference unless you supply some substance to your argument.
My argument has all the substance it needs, since you haven’t had an answer for any of the three prongs of the argument. You either rebut or accept. Doing neither places you in no position to assert that my argument has failed, except in terms of convincing you, which—for all we know—is impossible regardless of the evidence.
You have no idea how we could do otherwise in the way you claim, i.e., you wouldn’t know where to start if asked to build a machine which could do this.
Likewise, you have no idea how we could do something necessarily if causal determinism isn’t true—but for some reason that doesn’t stop you from saying it. Your position is hypocritical, Stephen, and that isn’t even its greatest weakness.
The fact is that I do have an idea as to how it could be done, but since I strongly suspect you’ll only accept a causal explanation as an explanation, I took the tack of exposing your hypocrisy instead of trying to jump through what would inevitably be an absurd hoop.
And you have no idea how, even if you achieved it, it could possibly add more than a luck factor.
And you failed to establish luck as an ontological factor in determining outcomes. Each of your objections met this type of fate, yet you continue to object.
And you are in denial if you think such a choice-making machine wouldn’t be debilitated.
Would I be in denial if I doubted you had the capability of establishing your claim with evidence and logic?
It is completely obvious that such a chess computer would choose bad moves every now and then and lose.
How is that worse than having a chess computer doomed to pick the same bad move every time?
You got around this by only allowing it to do otherwise very rarely, but that’s daft: we have free will if we could do otherwise, but not often enough for us to fall apart.
... because the appeal to ridicule is suddenly a logical argument?
Actually, there is now a draughts (checkers) playing machine which can only ever win or draw. If we added an element of probability, it would lose a little or a lot depending on how often that element played a part.
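Stephen’s quantitative claim is easy to model. Here is a deliberately crude sketch of my own, in which a “solved” player never loses unless a probabilistic element forces a blunder; the loss rate then scales with how often that element intervenes, “a little or a lot.” The blunder probabilities are arbitrary illustrations, not figures about any real machine.

```python
import random

def play_game(p_blunder, rng):
    """Toy model: a solved player draws or wins every game unless a
    probabilistic element forces a blunder, which loses the game."""
    if rng.random() < p_blunder:
        return "loss"
    return "draw_or_win"

def loss_rate(p_blunder, games=10_000, seed=0):
    """Fraction of games lost over many trials with a fixed seed."""
    rng = random.Random(seed)
    losses = sum(play_game(p_blunder, rng) == "loss" for _ in range(games))
    return losses / games

# The more often the probabilistic element plays a part,
# the more the machine loses.
print(loss_rate(0.01), loss_rate(0.25))
```

With the element switched off entirely (`p_blunder=0.0`) the machine never loses, which is exactly the solved-draughts case Stephen describes.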
Ah, so the computer was lucky enough to be programmed well. Like winning the lottery, eh?
You know that adding “could do otherwise” between the evaluation process and the action would make us out of control, so you shove it in a bit earlier instead to avoid that problem. But you still have absolutely no reason to think it could give us responsibility.
Other than the fact that I established DMR according to Strawson’s criterion (yet another argument you’ve ignored)?
Whatever the rights and wrongs of my arguments, what is indisputable is that you don’t have one.
So I shall play no further part in this farcical conversation.
If you make good on your promise then how will you react to the following?
1. We could do otherwise.
2. That 1 could make us ultimately responsible for our actions.
The proof of number 1 is elementary. All that is needed is the concession that causal determinism is not necessarily true. If Stephen can’t make that concession then he will inevitably beg the question regarding the existence of libertarian free will. His own presuppositions will prevent him from accepting it. He doesn’t have to believe it exists. He just needs to allow that it’s possible so that we can construct a possible world in which it manifests itself.
Causal determinism, by definition, is the state of affairs whereby the state of the world at moment A necessarily results in the state of the world at subsequent moment B. Once the existence of libertarian free will is established (for the sake of argument), the first criterion is met (the argument is not whether or not we have libertarian free will but whether or not it meets the above criteria).
P1 If you have mental state Q that was determined by your will, which was not in turn causally determined by preceding states, then you have ultimate responsibility for who you are (at least in some mental respects).
P2 DMR* is having ultimate responsibility for the way you are (at least in certain mental respects).
C If you have mental state Q that was determined by your will, which was not in turn causally determined by preceding states, then you have DMR.
*Strawson: “So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are—at least in certain mental respects.”
Probably Stephen’s sticking point is that “certain mental respects,” in his mind and perhaps Strawson’s as well, means the initial mental respects, regardless of how those initial states influence subsequent mental states.
Here are two models stemming from initial mental states. One for determinism and one for the alternative.
A does not change from top to bottom because it is the given starting mental state. I’ve arbitrarily limited the LFW model to two possible/accessible choices at each juncture just to keep it simple.
In the first model, it is fair to say that initial state A is “ultimately responsible” for subsequent state B. If we wish, we can add that state A is ultimately responsible for state C, for state C was in turn bound to occur given state A. We can use the same reasoning right on up through X, since given causal determinism X was bound to happen following A.
But what about the other model? Can we say that A is “ultimately responsible” for any of the subsequent states from B(1) through X(8)?
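The two models can be enumerated directly. The sketch below is my own toy rendering of the diagrams as described: determinism gives one necessitated sequence from A, while two accessible options at each of three junctures give two B states, four C states, and eight terminal X states, so eight accessible histories from the same A.

```python
def enumerate_paths(levels=("B", "C", "X")):
    """Enumerate every accessible path under the LFW model: the state
    indexed i at one level branches to the states indexed 2i-1 and 2i
    at the next, yielding B(1)-B(2), C(1)-C(4), X(1)-X(8)."""
    paths = [(("A",), 1)]
    for level in levels:
        paths = [(states + (f"{level}({c})",), c)
                 for states, i in paths for c in (2 * i - 1, 2 * i)]
    return [states for states, _ in paths]

# Deterministic model: one sequence, each state necessitating the next.
deterministic_path = ("A", "B", "C", "X")

lfw_paths = enumerate_paths()
print(len(lfw_paths))  # 8 accessible histories from the same starting state A
print(lfw_paths[0])
```

Under determinism, A is “ultimately responsible” for the single path’s endpoint; under the LFW model, nothing about A settles which of the eight endpoints occurs.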
Picture each of the mental states as billiard balls. You strike the cue ball on trajectory A, which should cause it to strike B (given causal determinism). But instead of striking B, the ball changes trajectory and strikes B(2). Where is the ultimate responsibility for result B(2)?
If we take as given that causal determinism would cause the striking of billiard ball B, we have a problem accounting for ultimate responsibility. A scientist would tend to look at the billiard ball to see what had caused it to upset expectations. But the scientist works under the assumption of causal determinism so he can’t answer our question without dropping his assumption. But the scientist’s first move is correct. Under the assumptions of LFW, only the ball can be ultimately responsible for the action because no preceding state (or combination thereof) provides an explanation for the outcome.
Of course, Stephen did make clear that he doesn’t believe in “ultimate responsibility” at all even if causal determinism is true (after all, what caused A?). That only prompts us to wonder what role he sees for “ultimate responsibility”—why should we consider his definition of the term a necessary prerequisite for LFW?
That argument appears to rest on his unfounded assertion that chance acts as some form of ontological entity determining outcomes, as though without putting himself to the test 10,000 times Stephen could have predicted how many times he would stab someone given 10,000 opportunities.
Stephen admits that his reasoning could be wrong, but refuses to accept the consequences of his error. And, in fact, he appears to have taken up the ball with the intent of going home with it.
The “ultimate responsibility” argument Strawson and Stephen use is a smokescreen. Who cares what your mental state was with A if A need not result in B? Moral responsibility occurs when the being starting at mental state A knowingly chooses its course after that point where more than one option remains accessible.
Just as you look to the cue ball for responsibility when it strikes B(2), so you look to the free moral agent for responsibility when it takes one accessible option over another. If you start looking for what caused the ball to strike B(2) instead of B(1) beyond the cue ball itself then you’re operating under the assumption of causal determinism. And even then you’ll never find an ultimate cause as defined by Stephen.
In the end, the “ultimate responsibility” thing has nothing to do with an awareness of choices ending in an uncaused cause, which is what an LFW advocate would have in mind. It turns out to be a challenge to explain LFW minutely in terms of causal determinism.