“Arguably the burden of proof is on those who claim a given description involves an impossibility.”
Oh, this is a fun game! Now I take a citation that says the opposite:
Because zombies, ex hypothesi, behave just like regular humans, they will claim to be conscious. Thomas argues that any construal of this claim (that is, whether it is taken to be true, false, or neither true nor false) inevitably entails either a contradiction or a manifest absurdity.
Hmmm. Just two problems. First, you were the one who was dismissive of burden-of-proof issues in the field of philosophy. Your course now is inconsistent with your former course. Second, your citation doesn’t appear to mean what you think it means. Nigel Thomas doesn’t argue that p-zombie advocates ought to bear the burden of proof. He himself takes up the burden of proof, trying to show that philosophical zombies are absurd. You can’t both argue that p-zombies are absurd and avoid the burden of proof. That’s contradictory. Take your pick.
More to my point: whether p-zombies exist is not an empirical question, not even in principle. If I throw a die behind a door that I then close forever, without ever seeing the die, I can never know which of 1 to 6 was thrown. But I know that the question could be answered in principle. By contrast, given the definition of a p-zombie, that it behaves exactly like a conscious human being (it complains about pain, it has discussions with me about consciousness and the possibility that p-zombies exist), there is no way I can even conceptually imagine answering the question.
Then don’t go around saying it’s “obvious” that others are conscious if you don’t really know.
when philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition
The quotation does you little good if you’re unwilling to take up the burden of proof for the demonstration (showing the alleged violation of the definition).
The reason that I cannot say that consciousness, etc. do not exist is because I possess knowledge that apparently contradicts that proposition.
Consistently I must say: no idea if you are conscious. I know I am, but you might be a p-zombie… You just p-think you are conscious (if you understand what I mean).
Of course. But I’m not expecting you to take my word for it. I’m simply offering up the logical reasoning that I use. If I perceive myself as conscious then it would be perverse for me not to accept the existence of consciousness, at least for myself.
From a third party view I can never decide whether something else has consciousness. Only when I ‘talk’ with it do I notice it. Which is more or less the Turing test. Even intelligent behaviour need not be proof: chess programs behave quite intelligently. But that is already the second party view. You are taking my idea from the wrong side: a ‘view’ always requires a ‘subject’, which by definition must be conscious; otherwise there is not even a view.
That’s not a very clear explanation (I can’t tell if you’re describing my interpretation of your third-party view or your own understanding of it).
The conversation exists just as objectively in terms of the exchange of symbols.
Objective symbols? Come on Bryan, they are just pixels.
Substitute “characters” if it helps you avoid equivocation.
If symbols exist, they do so because they can be interpreted. And that is only in the domain of the first and second party view.
And now you’ll explain how the forum does not exist from the third party view (“Even this forum is an illusion”).
It’s very important to note that there may be brain states subsequent to brain state Q but prior to the action. We could either have a brain state that leads indeterministically to a subsequent brain state that in turn causes the desire for action, or we may have one brain state for which the epiphenomenal consciousness varies indeterministically. Either option is consistent with the model.
OK, I think I understand. For the first option my arguments remain valid: brain states ‘correlate’ with ‘actions’, be it via a detour. Epiphenomenalism is not a serious option: my actions are my actions because they are caused by my desires and beliefs. Brain states (third party view!) that cause actions do so because they are desires and beliefs on the first party view. (And I can report this on the second party level.)
1) Your talk of a “detour” is not clear. Thinking is not a detour.
2) You need something very close or identical to epiphenomenalism if you’re positing your intentions as non-material causes. The reason is that the intentions in turn require a fully explanatory cause within a deterministic milieu. If you skip that, then maybe you’re a libertarian free will advocate and haven’t realized it yet.
If you deny epiphenomenalism then you need some sort of substitute in order to have a workable CFW model featuring the conscious will as a cause.
If the third party has no beliefs then why posit a third party view?
Sigh… I did not say a third party has no beliefs. I said we cannot know that something or somebody has beliefs if I take the third party view. But I know that perfectly well on the second party view.
Again, the explanation is not clear. How do you take a third party view without effectively transforming it into a first-party view? If you mean the third party view as objectivity then consider using that handy term.
It simply recognizes them as unverifiable and recognizes limits on the tools of epistemology.
Of course, I see that too. But epistemology is only relevant in the third party view.
That’s extraordinarily unclear.
Quantum random generators are not theory. (Measure the number of ticks in a Geiger-Müller device in one second.) They are truly indeterministic. Of course you do not have a quantum random generator in your PC (Mac?), so what you have there is not a real random generator.
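The contrast can be made concrete in a few lines. A minimal Python sketch (the function name `prng_sequence` is mine, purely for illustration): a software pseudorandom generator is fully deterministic, so replaying the same seed reproduces the same “random” sequence, whereas a quantum source such as a Geiger-Müller tick count has no seed to replay.

```python
import random

def prng_sequence(seed, n=5):
    """Simulate n die throws with a seeded software PRNG."""
    rng = random.Random(seed)  # local generator; deterministic given the seed
    return [rng.randint(1, 6) for _ in range(n)]

# Two runs with the same seed are identical -- nothing like a real die,
# and nothing like counting radioactive decays.
assert prng_sequence(42) == prng_sequence(42)
```

The assertion always passes: the “randomness” is entirely a function of the seed, which is the point of the distinction above.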
I can’t tell if you paid attention to my previous post or not. If you’re going to make these sorts of claims then provide a supporting (non-Wikipedia) citation. Like I did.
I want to add an argument in the same line as the p-zombie argument (...)
We know what the difference is in principle: self-awareness/consciousness. Your argument doesn’t work because of a faulty premise (that there is no difference between a p-zombie and its conscious double). One is conscious and the other isn’t. That’s a difference in principle. The difference doesn’t matter in terms of behavior, but that’s not at all the point of p-zombies in this argument (therefore irrelevant). The point is that you can’t show that conscious desire is a cause. You can assume it for a model of the will, but you can’t know it any more than you can know of other minds.