Cognitive Computer Chips
Posted: 21 September 2011 09:47 AM   [ Ignore ]   [ # 181 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

I think, like water, when we understand the underlying physics (or in this case neurology) of it, it will become clear what the subjective experience of consciousness really is. It seems to me to be an illusion - an illusion of both existence and experience.

Crazy idea: maybe we only feel that our consciousness is separate/non-physical because it is evolutionarily advantageous (e.g. allows one to justify sacrificing oneself in battle so that one’s kin may live).

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 21 September 2011 09:47 AM   [ Ignore ]   [ # 182 ]
Sr. Member
Total Posts:  4455
Joined  2007-08-31
kkwan - 21 September 2011 09:36 AM

On that basis, something that behaves as if it is conscious is a zombie computer.  LOL

You do pretty well in this Turing test. In this forum, I mean. One might even think you are conscious. Only the fact that you use online encyclopedias so much makes one suspicious: I think your programmer must increase your standalone argumentative capabilities. cool smile

 Signature 

GdB

“The light is on, but there is nobody at home”

Posted: 21 September 2011 09:57 AM   [ Ignore ]   [ # 183 ]
Sr. Member
Total Posts:  4455
Joined  2007-08-31
domokato - 21 September 2011 09:47 AM

an illusion of both existence and experience.

Wow, an illusion of experience. That sounds really interesting.  wink

domokato - 21 September 2011 09:47 AM

Crazy idea: maybe we only feel that our consciousness is separate/non-physical because it is evolutionarily advantageous (e.g. allows one to justify sacrificing oneself in battle so that one’s kin may live).

Well, consciousness in itself surely is evolutionarily advantageous. Probably the ‘single user illusion’ is too. But to feel separate… I don’t know.

To give an example: Dennett calls the single user illusion a benign illusion. Blackmore, who to a large extent took over Dennett’s idea of ‘multiple drafts’ consciousness, calls it a malign illusion. Understandable for somebody who is practising Buddhist meditation.

 Signature 

GdB

“The light is on, but there is nobody at home”

Posted: 21 September 2011 10:17 AM   [ Ignore ]   [ # 184 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
kkwan - 21 September 2011 09:13 AM

I’m glad you are doing some research, kkwan. Unfortunately, you have some misconceptions. The halting problem is deciding whether or not a program will halt. This can be undecidable for some programs. The halting problem has no relation to turing completeness. However, given a hypercomputer, the halting problem can be decided for all programs. The brain cannot solve the halting problem for all programs, so there is some evidence right there that it is not a hypercomputer (although it should be obvious enough that it is not, given that it cannot compute an infinite number of things in a finite amount of time).

A Turing complete machine requires an explicit HALT state. If there is a halting problem, there is no explicit HALT state.

Not quite. Remember, the halting problem is not “a problem of not halting”. The “problem” in the halting problem is deciding whether or not a program will halt. So you have it backwards. There is no halting problem if there is no halt state, because no program ever halts; deciding whether a program will halt is then trivial. There is a halting problem only if there is a halt state, because a program can either loop forever, never reaching that halt state, or halt. No general algorithm can decide which of the two will happen for every program.
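To make the “deciding” part concrete, here is a rough Python sketch (the function names are mine, purely for illustration): in a language with no HALT instruction the decision is trivial, while the general decider is exactly what Turing proved cannot exist.

def halts_without_halt_state(program, input_data):
    # In a language with no HALT instruction every program runs forever,
    # so the "halting problem" for that language is trivially decidable:
    return False   # "never halts" is always the correct answer

def halts(program, input_data):
    # The general case: decide whether an arbitrary program halts on a
    # given input. Turing proved no such total algorithm can exist -
    # that impossibility is what the halting problem refers to.
    raise NotImplementedError("provably impossible in general")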

It does matter because computers are not fractal. Thus, the organization of the brain is fundamentally different to that of the computer and as such it cannot be compared to a computer at all.

What is so “fundamental” about fractals? I think the reason organisms use fractals is because fractals are generated from simple algorithms, yet yield complex structures. This means it can be easily encoded in DNA. Of course, the fractal structure must also be adaptive in some way.

Software can have fractal architectures. Recursive functions in programs are pretty damn fractally too.
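For instance, here is a hypothetical little Python routine (the names and numbers are mine, just for illustration) that builds a Koch-style curve: a few self-similar lines of code generate arbitrarily detailed structure, which is roughly what I mean by “fractally”.

def koch_curve(p1, p2, depth):
    # Recursively replace the segment p1->p2 with the four-segment Koch
    # pattern; each recursion level adds another layer of self-similar detail.
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
    a = (x1 + dx, y1 + dy)              # one third of the way along
    b = (x1 + 2 * dx, y1 + 2 * dy)      # two thirds of the way along
    # peak of the middle "bump": the middle third rotated by 60 degrees
    peak = (a[0] + 0.5 * dx - 0.866 * dy, a[1] + 0.866 * dx + 0.5 * dy)
    points = []
    for start, end in [(p1, a), (a, peak), (peak, b), (b, p2)]:
        points.extend(koch_curve(start, end, depth - 1)[:-1])
    return points + [p2]

print(len(koch_curve((0.0, 0.0), (1.0, 0.0), 4)), "points from one tiny recursive rule")

Each depth level reuses the same rule on its own output; that self-reference is the whole trick, and it fits in a handful of lines, which is why it is so cheap for DNA or for code.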

Whether or not something has a fractal structure has no bearing on turing equivalence. And remember, we are talking about turing equivalence because you said a computer cannot do what a brain does. Turing equivalence means it can.

According to the laws of physics, yes, it is possible. It’s all computation, and it’s not infinite, so it’s possible.

But, the brain is a living biological organism and cannot be purely described by the laws of physics.

Of course it can. All biological organisms can. From bacteria on up. The field of biology is built upon chemistry, which is built upon physics.

Since it is not, the distinction between analogue and digital is not relevant for the question of turing equivalence between computers and brains.

The issue is, analog and digital computers work differently. Each has its strengths and weaknesses. Thus, the analog, fractal brain has tremendous advantages over digital computers in situations where computations are astronomical, tedious and time-consuming, and where there could also be a halting problem.

Well, the brain has an advantage wherever it has evolved to do well, such as sensory processing and language processing. Computers are good at anything that can be programmed into them. Right now, what we know how to program into them includes things like calculating relativistic motion and generating and displaying complex interactive 3D worlds. Those computations are “astronomical, tedious, and time consuming”, yet computers are much better at them than brains are. You know, computers can work with the real number domain (i.e. analog values), just not with perfect precision (just like the brain).
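A quick illustration of that last point in plain Python: floating-point numbers cover the reals only approximately, so in practice you work to a tolerance rather than to perfection.

import math

print(0.1 + 0.2)                      # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)               # False: the digital representation is approximate
print(math.isclose(0.1 + 0.2, 0.3))   # True: in practice you compare within a tolerance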

Why do you keep bringing this up? You know it is only a matter of time until a Go champion is beaten by a computer, just like Jeopardy before it, and Chess, and Checkers before that.

No, I don’t think it is only just a matter of time.  Why not now, or is it that the most powerful supercomputer available now is not capable of doing that at all?

That, and the state of the art in Go engine architecture is not that great, probably. Why did it take all this time for Watson to beat a Jeopardy champion? It was a mix of lack of computing power and lack of theory of how to play the game. Eventually, those advanced to the point where winning was possible.
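As I understand it, the stronger Go programs these days lean on Monte Carlo playouts: play lots of random games from a candidate move and prefer the moves that win most often. A toy Python sketch of just that core idea (every name here is a placeholder I made up, not a real engine):

import random

def random_playout(position, legal_moves, play, winner, move_limit=400):
    # Play random legal moves until the game ends (or we give up) and report
    # which side won. legal_moves/play/winner stand in for a real Go
    # implementation - this only sketches the Monte Carlo idea.
    for _ in range(move_limit):
        moves = legal_moves(position)
        if not moves:
            break
        position = play(position, random.choice(moves))
    return winner(position)

def win_rate(position, move, playouts, legal_moves, play, winner):
    # Estimate a candidate move's strength as its win rate over many playouts.
    start = play(position, move)
    wins = sum(random_playout(start, legal_moves, play, winner) == "us"
               for _ in range(playouts))
    return wins / playouts

Scale the playouts up and put a search tree on top and you get roughly what the current engines do; the bottleneck is exactly the computing power and theory I mentioned.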

Would you like to bet on the computer beating a Go master?

Yes, I would grin . Since you think it will never happen, will you take a bet of $10,000 and within our lifetimes?

You wrote:

Since <this device that was made by aliens> is not designed/constructed by humans, it follows that <this device that was made by aliens> is not a machine and cannot be conceived/characterized as such.

It is that simple. “This device that was made by aliens” IS an alien machine. As such, it is just as inane, inappropriate and absurd as putting “an alien machine” into the sentence.

In the context of the sentence I wrote, it was specifically “the brain” and the meaning of the sentence should only be interpreted as such. Putting anything else in place of that is preposterous and a travesty of my intention in composing the sentence.

I give up.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 21 September 2011 06:55 PM   [ Ignore ]   [ # 185 ]
Sr. Member
Total Posts:  1884
Joined  2007-10-28
domokato - 21 September 2011 10:17 AM

Not quite. Remember, the halting problem is not “a problem of not halting”. The “problem” in the halting problem is deciding whether or not a program will halt. So you have it backwards. There is no halting problem if there is no halt state, because no program ever halts; deciding whether a program will halt is then trivial. There is a halting problem only if there is a halt state, because a program can either loop forever, never reaching that halt state, or halt. No general algorithm can decide which of the two will happen for every program.

Quite so. The halting problem is a problem of decidability. From the wiki on the Halting problem

In computability theory, the halting problem can be stated as follows: Given a description of a computer program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

In other words, if a program continues running for quite some time, will it eventually stop running or carry on running for ever?

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.

This is the computing equivalent of Gödel’s incompleteness theorems:

The concepts raised by Gödel’s incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar.
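Turing’s proof is by contradiction, and it is short enough to sketch in hypothetical Python (pretend halts(program, input) were a working general decider; the names are mine):

def paradox(program):
    # Suppose halts(program, input) existed and always answered correctly.
    if halts(program, program):   # would this program halt if fed itself?
        while True:               # ...then deliberately loop forever
            pass
    else:
        return                    # ...otherwise halt immediately

# Now ask: does paradox(paradox) halt? If halts() answers yes, paradox loops
# forever; if it answers no, paradox halts. Either way halts() gave the wrong
# answer, so no such general decider can exist.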

Partial solutions?

There are many programs that either return a correct answer to the halting problem or do not return an answer at all. If it were possible to decide whether any given program gives only correct answers, one might hope to collect a large number of such programs and run them in parallel, in the hope of being able to determine whether any programs halt. Curiously, recognizing such partial halting solvers (PHS) is just as hard as the halting problem itself.
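One obvious partial solver is simply to run the program with a time budget and answer only when it actually finishes. A rough Python sketch (the names are my own toy ones, not from the wiki):

import multiprocessing

def partial_halting_solver(func, arg, budget_seconds=5):
    # Answer "halts" only if func(arg) finishes within the time budget;
    # otherwise give no answer at all, since we cannot tell "slow" from "forever".
    worker = multiprocessing.Process(target=func, args=(arg,))
    worker.start()
    worker.join(budget_seconds)
    if worker.is_alive():
        worker.terminate()
        return None      # no answer - the program may or may not halt
    return "halts"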

You wrote:

What is so “fundamental” about fractals? I think the reason organisms use fractals is because fractals are generated from simple algorithms, yet yield complex structures. This means it can be easily encoded in DNA. Of course, the fractal structure must also be adaptive in some way.

Why are natural objects and brains fractal? It is the way of nature and the universe. And why are human-created/designed machines, like computers, not fractal? They are unnatural objects.  grin

Software can have fractal architectures. Recursive functions in programs are pretty damn fractally too.

Yes, but not the architecture of the hardware. That is a fundamental problem of all computers.

Of course it can. All biological organisms can. From bacteria on up. The field of biology is built upon chemistry, which is built upon physics.

Many biologists would not agree. This is extreme reductionism.

Well, the brain has an advantage wherever it has evolved to do well, such as sensory processing and language processing. Computers are good at anything that can be programmed into them. Right now, what we know how to program into them includes things like calculating relativistic motion and generating and displaying complex interactive 3D worlds. Those computations are “astronomical, tedious, and time consuming”, yet computers are much better at them than brains are. You know, computers can work with the real number domain (i.e. analog values), just not with perfect precision (just like the brain).

The brain is fractal, analog, quick to respond and that is an advantage in survival. It is not necessary to compute to the Nth degree as digital computers do. In fact, that would be disastrous for survival in the natural world.

That, and the state of the art in Go engine architecture is not that great, probably. Why did it take all this time for Watson to beat a Jeopardy champion? It was a mix of lack of computing power and lack of theory of how to play the game. Eventually, those advanced to the point where winning was possible.

Then, do it eventually. It should be fascinating.

Yes, I would grin . Since you think it will never happen, will you take a bet of $10,000 and within our lifetimes?

I don’t have 10,000 shekels. An acknowledgment at the appropriate time, will do.

I give up.

smile

 Signature 

I am, therefore I think.

Posted: 21 September 2011 07:02 PM   [ Ignore ]   [ # 186 ]
Sr. Member
Total Posts:  1884
Joined  2007-10-28
GdB - 21 September 2011 09:47 AM

You do pretty well in this Turing test. In this forum, I mean. One might even think you are conscious. Only the fact that you use online encyclopedias so much makes one suspicious: I think your programmer must increase your standalone argumentative capabilities. cool smile

Hmm….GdB, that’s preposterous. 

Am I a semi-conscious computer or what? 

Programmer’s note:

kkwan has been upgraded and enhanced with the following modules:

1. Standalone argument imperative version 2.075879

2. Non quoting or minimal quoting imperative version 3.856672

LOL

[ Edited: 21 September 2011 07:52 PM by kkwan ]
 Signature 

I am, therefore I think.

Posted: 21 September 2011 08:29 PM   [ Ignore ]   [ # 187 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
kkwan - 21 September 2011 06:55 PM

Why are natural objects and brains fractal? It is the way of nature and the universe.

Eh, what a meaningless thing to say.

And why are human-created/designed machines, like computers, not fractal? They are unnatural objects.  grin

An example of a man-made object that is “unnatural” and fractal:
http://en.wikipedia.org/wiki/Fractal_antenna

Software can have fractal architectures. Recursive functions in programs are pretty damn fractally too.

Yes, but not the architecture of the hardware. That is a fundamental problem of all computers.

Has no bearing on turing equivalence, therefore it’s not a fundamental problem.

Of course it can. All biological organisms can. From bacteria on up. The field of biology is built upon chemistry, which is built upon physics.

Many biologists would not agree. This is extreme reductionism.

You think biologists appeal to magic? What is there besides physics?

Well, the brain has an advantage wherever it has evolved to do well, such as sensory processing and language processing. Computers are good at anything that can be programmed into them. Right now, what we know how to program into them includes things like calculating relativistic motion and generating and displaying complex interactive 3D worlds. Those computations are “astronomical, tedious, and time consuming”, yet computers are much better at them than brains are. You know, computers can work with the real number domain (i.e. analog values), just not with perfect precision (just like the brain).

The brain is fractal, analog, quick to respond and that is an advantage in survival. It is not necessary to compute to the Nth degree as digital computers do. In fact, that would be disastrous for survival in the natural world.

Meh. I’ve already responded to these points.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 23 September 2011 09:08 AM   [ Ignore ]   [ # 188 ]
Jr. Member
Total Posts:  89
Joined  2011-09-20

deciding whether a program will halt or not is trivial.

Unless it’s running under Windows ME, in which case halting is guaranteed!

I think the issue of why/how consciousness evolved is interesting.  This will be my longest post so far, so bear with me!

Imagine a unicellular critter I will call a ‘bouncer’.  When a bouncer encounters an object, a chemical is released at the point of contact on its cell wall, setting up a concentration gradient across its single cell that causes its flagella to move in a way that makes it move away from that object.  From that simple beginning we can imagine bouncers evolving to use two different chemicals for different types of object they collide with, so some objects they might bounce off, while others they might tend to hug by producing a different pattern of flagella action. 

A chemical gradient set up by such a collision can be viewed as encoding information - “there is an object at relative position X”.  The chemical gradient is an intermediate ‘information channel’ between the stimulus (a collision with the cell wall) and the response (flagella movement).  The point of that fable is to show that it is very easy to start up a process of evolving a system that supports a ‘model’ of the external world.  The bouncer’s model of the world is very simple and crude - it only represents a small aspect of the critter’s immediate environment - but we can imagine evolution building on it to produce a system that takes in all sorts of sensory data, transcribes it into some intermediate information-bearing form and then uses the information so encoded to control behaviour.  We can also imagine the evolution of processes that operate on the intermediate form itself, seeking correlations or patterns in the data stored there. 
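If it helps, here is a toy Python sketch of a bouncer (everything in it is invented purely for illustration): the collision deposits a gradient, and it is the gradient - the intermediate information channel - that drives the flagella, not the collision itself.

import math

class Bouncer:
    def __init__(self):
        self.gradient_angle = None    # direction of the chemical gradient, if any

    def collide(self, contact_angle):
        # The collision is encoded as a gradient pointing at the contact point
        # on the cell wall - this is the intermediate information channel.
        self.gradient_angle = contact_angle

    def flagella_response(self):
        if self.gradient_angle is None:
            return "cruise"           # nothing encoded yet
        # The flagella read the gradient, not the collision: swim directly
        # away from where the gradient says the object is.
        away = (self.gradient_angle + math.pi) % (2 * math.pi)
        return "swim towards %.2f radians" % away

b = Bouncer()
b.collide(contact_angle=0.5)     # object bumped the wall at 0.5 radians
print(b.flagella_response())     # behaviour driven by the internal encoding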

So - I want to suggest - one aspect of consciousness - the support of an internalised model of the external world - could evolve quite easily.  But I wouldn’t suggest that a bouncer with its simple, collision-induced chemical gradients had any actual ‘awareness’, let alone ‘self-awareness’, which is the really interesting aspect of consciousness.  Before considering that, consider what of the external world gets represented, and how it is encoded in an informational intermediate, whether as a chemical gradient or a pattern of synaptic activity.  It would be nice if the fact that there is a predator hiding behind a rock were represented, but it can’t be.  What gets represented are not all the facts about the world, but a subset of those facts.  For example, a representation might contain the information that a rock is hard, but not that it is made of atoms.  Some of the information encoded might actually be wrong, but as long as the wrong information is useful, that is OK as far as evolution is concerned.  So if we compare what is represented with what is real, we will find the representation is incomplete and occasionally even erroneous. 

The representation system evolves for the purpose of ensuring critters find food and mates, not to solve epistemological riddles, so it will emphasise what is important for survival.  Of all the things such a representation must represent, surely the most important is the critter itself.  This means the representational system must evolve to represent not only the outside world but also itself within itself - which may or may not be the dawning of self-awareness, but I think it’s not a totally daft idea.

Posted: 24 September 2011 04:30 AM   [ Ignore ]   [ # 189 ]
Sr. Member
Total Posts:  4455
Joined  2007-08-31
keithprosser2 - 23 September 2011 09:08 AM

So - I want to suggest - one aspect of consciousness - the support of an internalised model of the external world - could evolve quite easily.  But I wouldn’t suggest that a bouncer with its simple, collision-induced chemical gradients had any actual ‘awareness’, let alone ‘self-awareness’, which is the really interesting aspect of consciousness. 

I don’t know if this follows that easily. Take a thermostat: would you say it has an internal representation of the temperature in the room? Even if the mechanism is more complicated, like your bouncer, I still would not speak of a representation.

Allow me another example: a loud bang occurs next to an organism. One organism goes away immediately; another, of exactly the same kind, stays where it is, just shaking a little. What is going on? Well, both are human: the first is a soldier in a war, the other is somebody enjoying fireworks and laughing. So the difference in their reactions can be explained by an internal difference, in this case their completely different representations and their view of their own role in the represented situation.

So in my opinion the evolutionary advantage is the possibility to anticipate external circumstances, and to act according to the representation of these circumstances and of one’s own role in them. My tentative view is that when a system, biological or electronic, is able to do this, it is conscious. What I am not quite sure about is whether it also needs the capability to communicate about these inner states. In a certain way, the Turing test of course suggests that this capability is the only one needed. And maybe this is correct: without having these inner states, it might be very difficult to communicate as if one had such states while not really having them. Another argument why the idea of p-zombies is absurd. And also an argument that consciousness is real, that it really plays a causal role, and can still be completely understood from the low-level neural processes.
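A rough Python sketch of the contrast I have in mind (the situations and rules are completely made up): the thermostat just maps input to output, while the reaction to the bang depends on an internal representation of the circumstances and of the agent's own role in them.

def thermostat(temperature, setpoint=20.0):
    # Pure stimulus-response: nothing here represents the room, it just maps
    # a number to an action.
    return "heat on" if temperature < setpoint else "heat off"

def react_to_bang(model):
    # The same stimulus - a loud bang - produces different behaviour depending
    # on the agent's representation of the situation and of its own role in it.
    if model["situation"] == "war" and model["my_role"] == "soldier":
        return "take cover"
    if model["situation"] == "fireworks":
        return "keep watching and laugh"
    return "freeze and reassess"

print(thermostat(18.5))
print(react_to_bang({"situation": "war", "my_role": "soldier"}))
print(react_to_bang({"situation": "fireworks", "my_role": "spectator"}))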

 Signature 

GdB

“The light is on, but there is nobody at home”

Posted: 24 September 2011 07:35 AM   [ Ignore ]   [ # 190 ]
Jr. Member
Total Posts:  89
Joined  2011-09-20

My pet bouncer was intended to illustrate how a simple intermediary between stimulus and response could arise, just to give evolution a starting point.  The chemical gradient does ‘encode information’: you could examine it and work out at which point on the bouncer’s surface the collision occurred… the chemical doesn’t encode much information, but it does at least encode some.  Whether you’d call that a ‘representation’ seems like a semantic issue… but I think it could be enough of a start for evolution to produce information-bearing systems of ever greater sophistication that would definitely count as representations. 

Re thermostats, I think the bend of the bi-metallic strip would correspond only to the chemical gradient in a bouncer.  Thermostats may well be no less conscious than my bouncers, but I am not claiming bouncers are conscious.  Rather, I want to point out that an essential element of how consciousness works (at least the sort of consciousness we are familiar with) is that it involves an informational model, and it is the information in that model that consciousness is of.  We are not conscious of the world directly - we are conscious of whatever it is that gets put into a neural model of the world in our brains.  That is not sufficient for consciousness - you don’t only need a model of the world, you also need a way to have ‘awareness’ of what IS encoded in that model, of what it represents… I would say that neither a bouncer nor a thermostat has the required level of awareness to consider them conscious entities, but bouncers are more likely to evolve something close to consciousness than thermostats!

I think soldiers and firework fanciers are a bit high on the evolutionary scale for now, so I’d say it is safe to say that the most primitive thing we could call a brain takes in a limited number of sensory signals and uses them to trigger a limited number of rigidly fixed responses.  I’d guess that insect brains work just like that.  In something like a bouncer (and even more so a thermostat) you can imagine writing down the exact chain of causality from sensory input to ‘intermediate representation’ to ‘action’ in almost perfect detail.  With an insect, writing down a full description of the causality would (I’d guess) be a bit harder, but still possible in principle (if all it is doing is mapping sensory input to action as I have suggested).  As insects are ancient and numerous, such a simple system obviously works well!  But insects aren’t very good at novel thinking.  A fly will continue bumping against a pane of glass until it is dead.  Switching between a fixed and limited repertoire of responses is obviously not ideal. 

But switching between behaviours can be done unconsciously - that is, without having to have consciousness.  It’s not really any different from changing from a forward to a reverse gear.  But if you want to evolve the ability to do more than just switch between behaviours, what do you have to do?  It seems to me that one of the things nature did is evolve consciousness as a way of being able to process more information more effectively.  There might be other ways to achieve improvement, but nature opted to evolve consciousness.  (I suppose nature could have gone along the route of more sophisticated but still unconscious behaviour switching, thus producing not conscious entities but p-zombies, for example.)

I have no doubt that consciousness is a way to improve our evolutionary fitness, even if it is not the only way.  If we assume the brain developed a mechanism for extracting the information out of its internal representation (which it must certainly have done!), then when the self started to be internally (self-)represented along with external phenomena, that same mechanism would - perhaps - give ‘self-awareness’. 
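To show what I mean by the same mechanism doing double duty, a toy Python fragment (all of it invented for the sake of argument):

world_model = {
    "rock":     {"position": (3, 1), "hard": True},
    "predator": {"position": (5, 2), "threat": True},
    # The self gets represented by the very same machinery as everything else.
    "self":     {"position": (0, 0), "hungry": True, "threat": False},
}

def aware_of(model, thing, attribute):
    # One generic "read out what the model represents" mechanism...
    return model.get(thing, {}).get(attribute)

print(aware_of(world_model, "predator", "threat"))   # awareness of the world
print(aware_of(world_model, "self", "hungry"))       # ...turned on the self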

Of course all that is speculation, and not very well informed speculation at that, but I don’t know if there is anything other than speculation that can be done with this topic!

Posted: 24 September 2011 11:38 AM   [ Ignore ]   [ # 191 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

Interesting ideas Keith, I tend to agree.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye
