Cognitive Computer Chips
Posted: 18 September 2011 09:38 AM   [ Ignore ]   [ # 151 ]
Sr. Member
Total Posts:  1822
Joined  2007-10-28
domokato - 17 September 2011 04:47 PM

I am telling you from my education in computer science that the brain has the same computational abilities as a computer, no more, no less. There is something called “turing completeness”. The brain is turing complete. Same with a computer. This means they are “turing equivalent”. This means given enough time and memory, a computer can compute anything a brain can.

Not true. Turing completeness does not relate at all to your contention that “the brain has the same computational abilities as a computer, no more, no less”. That is the false analogy. A brain is neither a computer nor a Turing machine.

From the wiki on Turing completeness

In computability theory, a system of data-manipulation rules (such as an instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if and only if it can be used to simulate any single-taped Turing machine and thus in principle any computer.
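As a concrete illustration of that definition (a minimal sketch only, not from the wiki), a single-tape Turing machine is small enough to simulate in a few lines of Python; a system is Turing complete exactly when, in principle, it can run something like this with an unbounded tape:

```python
# Illustrative sketch: a minimal single-tape Turing machine simulator.
# The rule table below is a made-up example, not any standard machine.

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules: {(state, symbol): (new_state, new_symbol, move)} with move in {-1, +1}."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        state, cells[pos], move = rules[(state, symbol)]
        pos += move
    return "".join(cells[i] for i in sorted(cells))

# Example: flip every bit until the first blank, then halt.
rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", +1),
}
print(run_tm(rules, "0110"))   # -> "1001_"
```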

You wrote:

I rejected your argument as insufficient because it in no way distinguishes humans from robots, because “a <robot’s computer> is also connected to a <robot’s> body and through it, interacts with its environment.” That’s not an analogy. That’s looking at what you said about humans and saying that a robot is the same way. Literally.

It is an analogy, and a false one at that. Also, I did not write “about humans and saying that a robot is the same way. Literally.” (whatever that means?)

Since humans and human brains are not designed/constructed by humans, it follows that they are not machines and cannot be conceived/characterized as such.
=> Since X is not designed/constructed by humans, it follows that X is not a machine and cannot be conceived/characterized as such.
=> Since <this alien machine> is not designed/constructed by humans, it follows that <this alien machine> is not a machine and cannot be conceived/characterized as such.

But if X is an alien machine, it is clearly a machine. So, plugging it into my sentence just like that is absurd, misleading and meaningless.

The state of the art in AI advances every year. We now have reliable speech-to-text systems, improving computer vision systems, improving natural language processing, improving machine learning algorithms, etc., and Japanese robotics is even trying to recreate emotion in robots, with impressive results. I wonder, if a day comes when a robot will be able to converse with you, sympathize with you, and claim it is conscious, will you still deny it?

That will be the day. Why not try the Go challenge first?

 Signature 

I am, therefore I think.

 
 
Posted: 18 September 2011 05:07 PM   [ Ignore ]   [ # 152 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
kkwan - 18 September 2011 09:38 AM
domokato - 17 September 2011 04:47 PM

I am telling you from my education in computer science that the brain has the same computational abilities as a computer, no more, no less. There is something called “turing completeness”. The brain is turing complete. Same with a computer. This means they are “turing equivalent”. This means given enough time and memory, a computer can compute anything a brain can.

Not true. Turing completeness does not relate at all to your contention that “the brain has the same computational abilities as a computer, no more, no less”.

Of course it does. If a brain is only turing complete then a computer can do anything a brain can. The only thing more powerful than turing completeness is hypercomputation, and there is nothing in the known universe that is capable of this. Therefore, your argument that a computer can’t do what a brain can do is false.

A brain is neither a computer nor a Turing machine.

The brain doesn’t have to be a computer or a turing machine to be turing complete.

Maybe this will help: http://www.cs.unm.edu/~saia/computability.html

A relevant quote:
Now modern computers can be simulated on a Turing machine and neural networks and biological models can be simulated on modern computers. Further, various complicated proofs have shown that neural networks and some biological models can simulate a Turing machine. Hence we know that neural networks and biological models (e.g. genetic algorithms) are also Turing complete.
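This doesn’t reproduce those proofs, but for a flavour of why networks of very simple units can implement arbitrary logic: a single hand-wired threshold unit computes NAND, and NAND gates suffice to build any Boolean circuit. The weights below are chosen by hand for illustration, nothing more:

```python
# Toy illustration: one threshold unit ("artificial neuron") computing NAND.
# NAND is functionally complete, so networks of such units can in principle
# implement any Boolean circuit. Weights are hand-picked, purely illustrative.

def neuron(inputs, weights, bias):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def nand(a, b):
    return neuron([a, b], weights=[-2, -2], bias=3)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand(a, b))   # prints 1, 1, 1, 0 for the four input pairs
```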

You wrote:

I rejected your argument as insufficient because it in no way distinguishes humans from robots, because “a <robot’s computer> is also connected to a <robot’s> body and through it, interacts with its environment.” That’s not an analogy. That’s looking at what you said about humans and saying that a robot is the same way. Literally.

It is an analogy and a false one at that. Also I did not write “about humans and saying that a robot is the same way. Literally.” (whatever it means?)

Let me add a comma: “That’s looking at what you said about humans, and saying that a robot is the same way. Literally.” In other words, you said humans have these attributes, and that’s what sets them apart from machines. But, machines have those same attributes, so your conclusion does not follow.

Since humans and human brains are not designed/constructed by humans, it follows that they are not machines and cannot be conceived/characterized as such.
=> Since X is not designed/constructed by humans, it follows that X is not a machine and cannot be conceived/characterized as such.
=> Since <this alien machine> is not designed/constructed by humans, it follows that <this alien machine> is not a machine and cannot be conceived/characterized as such.

But if X is an alien machine, it is clearly a machine. So, plugging it into my sentence just like that is absurd, misleading and meaningless.

Replace <this alien machine> with <this device that was made by aliens> then.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

 
 
Posted: 18 September 2011 08:34 PM   [ Ignore ]   [ # 153 ]
Sr. Member
Total Posts:  5976
Joined  2009-02-26

IMO, computers have the same computational powers as the human brain.
The difference lies in the incredible compactness of the human “brain” and its ability to process data from its environment in an abstract, symbolic, emotional way, e.g. being “moved” to tears by the “beauty” of a song or a scene.

This ability seems to be inherent in all living organisms to a greater or lesser degree. I am sure the answer lies in the refinement of the sensory nervous system. Can this refinement be achieved in computers?
IMO computers can be much more sensitive than humans for certain specific tasks and on a micro scale: chemical analysis, mapping DNA, etc.
But the human brain can “appreciate” and “love” (release of endorphins). Could a computer ever reach that kind of sophistication of mind/thought?

[ Edited: 18 September 2011 08:38 PM by Write4U ]
 Signature 

Art is the creation of that which evokes an emotional response, leading to thoughts of the noblest kind.
W4U

 
 
Posted: 18 September 2011 11:17 PM   [ Ignore ]   [ # 154 ]
Sr. Member
Total Posts:  5976
Joined  2009-02-26

There is amino acid synthesis, photosynthesis, and now there is synthetic synthesis. Where will it end?

http://www.sciencedaily.com/releases/2010/06/100630110908.htm

 Signature 

Art is the creation of that which evokes an emotional response, leading to thoughts of the noblest kind.
W4U

 
 
Posted: 19 September 2011 09:12 PM   [ Ignore ]   [ # 155 ]
Sr. Member
Total Posts:  1822
Joined  2007-10-28
domokato - 18 September 2011 05:07 PM

Of course it does. If a brain is only turing complete then a computer can do anything a brain can. The only thing more powerful than turing completeness is hypercomputation, and there is nothing in the known universe that is capable of this. Therefore, your argument that a computer can’t do what a brain can do is false.

“If a brain is only turing complete ” is a big IF. But, is it so? Where is the evidence?

The brain doesn’t have to be a computer or a turing machine to be turing complete.

Quite so, but again, is the brain Turing complete?

From the wiki on Turing completeness

In practice Turing completeness, named after Alan Turing, means that the rules followed in sequence on arbitrary data can produce the result of any calculation. This requires, at a minimum, conditional branching (an “if” and “goto” statement) and the ability to change arbitrary memory locations (formality requires an explicit HALT state).
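Those minimal ingredients - conditional branching, writes to arbitrary memory locations, and a HALT - are easy to exhibit concretely. Here is a toy interpreter along those lines (the SET/ADD/IFGOTO/HALT instruction set is invented purely for illustration, not tied to any real machine):

```python
# Toy interpreter with just the ingredients named above: conditional
# branching (IFGOTO), writes to arbitrary memory cells (SET/ADD), and an
# explicit HALT. The sample program sums the numbers 1..5 into cell "acc".

def run(program):
    mem, pc = {}, 0
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return mem
        elif op == "SET":            # SET cell value
            mem[args[0]] = args[1]
        elif op == "ADD":            # ADD cell other_cell
            mem[args[0]] = mem.get(args[0], 0) + mem.get(args[1], 0)
        elif op == "IFGOTO":         # IFGOTO cell target: jump if cell != 0
            if mem.get(args[0], 0) != 0:
                pc = args[1]
                continue
        pc += 1

program = [
    ("SET", "i", 5), ("SET", "acc", 0), ("SET", "minus1", -1),
    ("ADD", "acc", "i"),            # acc += i
    ("ADD", "i", "minus1"),         # i -= 1
    ("IFGOTO", "i", 3),             # loop while i != 0
    ("HALT",),
]
print(run(program)["acc"])          # -> 15
```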

And the wiki on the Halting problem

The Halting problem

In computability theory, the halting problem can be stated as follows: Given a description of a computer program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

In other words, if there is a halting problem, it cannot be Turing complete. Now, computers are designed to be Turing complete with an explicit HALT state, but is the brain the same? Not necessarily so.

In nature, there are many halting problems. For instance, fractals.

The computer is not fractal, but the brain is. Why is that so?

You quoted the following:

Now modern computers can be simulated on a Turing machine and neural networks and biological models can be simulated on modern computers. Further, various complicated proofs have shown that neural networks and some biological models can simulate a Turing machine. Hence we know that neural networks and biological models (e.g. genetic algorithms) are also Turing complete.

This is all theoretical. In practice:

The crucial point is, can “neural networks” or “some biological models” possibly ever fully simulate the brain in its totality, with its prodigious complexity of 100 billion neurons and 566 billion glial cells and the possible astronomical interactions between them? Has it ever been demonstrated to be so? If not, then the brain can neither be assumed nor be considered as a Turing machine or be Turing complete at all.

Another issue is, the brain is massively parallel, but is analog whereas a computer can be massively parallel, but is digital. Also, there is no specific CPU, memory or program per se in the brain. These fundamental differences allow the brain to quickly analyze a situation with many variables and arrive at an approximate solution whereas the computer must compute all the possible outcomes before it can decide on the best solution.

This is exemplified in the game Go. Take the most powerful supercomputer available now and pit it against an expert human Go player. How about it, if computers are so advanced and so similar to human brains? cheese 

Replace <this alien machine> with <this device that was made by aliens> then.

“This device that was made by aliens” is an alien machine.  LOL

 Signature 

I am, therefore I think.

 
 
Posted: 20 September 2011 06:52 AM   [ Ignore ]   [ # 156 ]
Sr. Member
Total Posts:  3121
Joined  2008-04-07
Write4U - 18 September 2011 08:34 PM

IMO, computers have the same computational powers as the human brain.

Really? Quick, what’s (3503403/32.432 + 874.309383* 73254)^1/3 ???  wink
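(For what it’s worth, a computer disposes of that in microseconds. A throwaway Python check, noting that “^1/3” as typed is ambiguous between “divide by 3” and “cube root”:)

```python
# Throwaway check of the quoted expression. As written, "^1/3" parses as
# "raise to the 1st power, then divide by 3"; the cube-root reading is
# probably what was intended, so both are shown.
x = 3503403 / 32.432 + 874.309383 * 73254
print(x / 3)            # literal reading: roughly 2.1e7
print(x ** (1 / 3))     # cube-root reading: roughly 400.3
```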

The difference lies in the incredible compactness of the human “brain” and its ability to process data from its environment in an abstract, symbolic, emotional way, e.g. being “moved” to tears by the “beauty” of a song or a scene.

That… is one helluva difference.

This ability seems to be inherent in all living organisms to a greater or lesser degree. I am sure the answer lies in the refinement of the sensory nervous system. Can this refinement be achieved in computers?

Probably, but not in Kurzweil’s (or my) lifetime. BTW, it is much more than sensory refinement. As you say, chemical analysis, etc. is “easy” with instrumentation.

IMO computers can be much more sensitive than humans for certain specific tasks and on a micro scale: chemical analysis, mapping DNA, etc.
But the human brain can “appreciate” and “love” (release of endorphins). Could a computer ever reach that kind of sophistication of mind/thought?

Color me skeptical.

 Signature 

Turn off Fox News - Bad News For America
(Atheists are myth understood)

 
 
Posted: 20 September 2011 10:21 AM   [ Ignore ]   [ # 157 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
kkwan - 19 September 2011 09:12 PM
domokato - 18 September 2011 05:07 PM

Of course it does. If a brain is only turing complete then a computer can do anything a brain can. The only thing more powerful than turing completeness is hypercomputation, and there is nothing in the known universe that is capable of this. Therefore, your argument that a computer can’t do what a brain can do is false.

“If a brain is only turing complete ” is a big IF. But, is it so? Where is the evidence?

Hypercomputation is not known to be possible and our brain has not been observed to exhibit it. If it did, we would be able to solve much harder problems than we currently can. Here are some models for hypercomputers that clearly outperform a brain (often dealing with infinite precision or an infinite number of calculations in a finite time), and are also not possible: http://en.wikipedia.org/wiki/Hypercomputation#Hypercomputer_proposals

Here is a paper on the subject: http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf . Although it does not address the brain, it should at least prove to you that hypercomputation is not possible in any way known to man.

The brain doesn’t have to be a computer or a turing machine to be turing complete.

Quite so, but again, is the brain Turing complete?

You’re not saying it’s less than turing complete, are you?

From the wiki on Turing completeness

In practice Turing completeness, named after Alan Turing, means that the rules followed in sequence on arbitrary data can produce the result of any calculation. This requires, at a minimum, conditional branching (an “if” and “goto” statement) and the ability to change arbitrary memory locations (formality requires an explicit HALT state).

Yes, the brain can simulate these functions, at least. If not, then it is less powerful than a computer…

And the wiki on the Halting problem

The Halting problem

In computability theory, the halting problem can be stated as follows: Given a description of a computer program, decide whether the program finishes running or continues to run forever. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

In other words, if there is a halting problem, it cannot be Turing complete.

I’m glad you are doing some research, kkwan. Unfortunately, you have some misconceptions. The halting problem is deciding whether or not a program will halt. This can be undecidable for some programs. The halting problem has no relation to turing completeness. However, given a hypercomputer, the halting problem can be decided for all programs. The brain cannot solve the halting problem for all programs, so there is some evidence right there that it is not a hypercomputer (although it should be obvious enough that it is not given that it cannot compute an infinite amount of things in a finite amount of time).
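For readers following along, the undecidability mentioned here has a famously short proof sketch. In Python-flavoured pseudocode (purely illustrative; `halts` is the hypothetical decider and cannot actually be implemented):

```python
# Classic diagonalization sketch: assume a perfect halting decider existed.
# "halts" is hypothetical; no such total function can actually be written.

def halts(program, program_input):
    """Hypothetically returns True iff program(program_input) halts."""
    raise NotImplementedError("no such decider can exist")

def troublemaker(program):
    # Loop forever exactly when the decider says "halts"; halt otherwise.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding troublemaker to itself yields a contradiction either way:
# if halts(troublemaker, troublemaker) is True, troublemaker loops forever;
# if it is False, troublemaker halts. So no universal halts() can exist.
```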

Now, computers are designed to be Turing complete with an explicit HALT state, but is the brain the same? Not necessarily so.

I’m not sure halting is required for turing completeness. The wiki says, “(formality requires an explicit HALT state)”. As in, it’s only a formality. Besides, it sounds like you’re arguing that a brain is less than turing complete, when you should be arguing that it is more.

In nature, there are many halting problems. For instance, fractals.

Fractals have nothing to do with the halting problem. In nature, fractals are not infinitely detailed anyway. They only are in math.

The computer is not fractal, but the brain is. Why is that so?

Is the brain fractal? (Not that it matters for this argument)

You quoted the following:

Now modern computers can be simulated on a Turing machine and neural networks and biological models can be simulated on modern computers. Further, various complicated proofs have shown that neural networks and some biological models can simulate a Turing machine. Hence we know that neural networks and biological models (e.g. genetic algorithms) are also Turing complete.

This is all theoretical. In practice:

The crucial point is, can “neural networks” or “some biological models” possibly ever fully simulate the brain in its totality, with its prodigious complexity of 100 billion neurons and 566 billion glial cells and the possible astronomical interactions between them? Has it ever been demonstrated to be so?

According to the laws of physics, yes, it is possible. It’s all computation, and it’s not infinite, so it’s possible.

If not, then the brain can neither be assumed nor be considered as a Turing machine or be Turing complete at all.

Well, if the brain can simulate a turing machine (which you can do even at the level of conscious thought), then that is enough to know it is turing complete. Besides, you should be arguing that it is capable of hypercomputation, not that it is not turing complete.

Another issue is, the brain is massively parallel, but is analog whereas a computer can be massively parallel, but is digital.

Modern CPUs are also parallel, and becoming more parallel each year (but this is irrelevant to turing equivalence since parallel processing does not achieve hypercomputation). A brain is analogue, but the laws of physics (quantum mechanics in particular) preclude infinite precision even in analogue systems. If there was infinite precision (i.e. access to numbers in the real numbers domain), then hypercomputation would be possible. Since it is not, the distinction between analogue and digital is not relevant for the question of turing equivalence between computers and brains. [http://en.wikipedia.org/wiki/Real_numbers#In_physics]

Also, there is no specific CPU, memory or program per se in the brain. These fundamental differences allow the brain to quickly analyze a situation with many variables and arrive at an approximate solution whereas the computer must compute all the possible outcomes before it can decide on the best solution.

Not true. Even in chess engines (like Deep Blue), the engine does not need to compute all possibilities to decide on a best solution. It doesn’t bother computing solutions that are not worth the time. This has no bearing on turing equivalence either way. It does not put brains on an unreachable pedestal.
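A concrete illustration of that pruning idea is alpha-beta search, which discards whole subtrees once they provably cannot change the final choice. A bare-bones sketch over an abstract game tree (not Deep Blue’s actual code, obviously; the toy tree and scores at the bottom are invented):

```python
# Bare-bones alpha-beta search: whole subtrees are skipped ("pruned") once
# they provably cannot affect the final choice, so the engine never examines
# all possible continuations. Purely illustrative, not any real engine.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break          # prune: the minimizer will never allow this line
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break          # prune
        return value

# Toy game tree: leaves carry scores, internal nodes list their children.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 children=lambda n: tree.get(n, []),
                 evaluate=lambda n: scores[n])
print(best)   # -> 3: once b1 scores 2, the whole b2 branch is pruned
```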

This is exemplified in the game Go. Take the most powerful supercomputer available now and pit it against an expert human Go player. How about it, if computers are so advanced and so similar to human brains? cheese 

Why do you keep bringing this up? You know it is only a matter of time until a Go champion is beaten by a computer, just like Jeopardy before it, and Chess, and Checkers before that.

Replace <this alien machine> with <this device that was made by aliens> then.

“This device that was made by aliens” is an alien machine.  LOL

I am astounded that you do not see the contradiction you are making, and that is the only reason I am still responding to this. Here:

Since <this device that was made by aliens> is not designed/constructed by humans, it follows that <this device that was made by aliens> is not a machine and cannot be conceived/characterized as such.

Presumably you don’t stand by (the logic of) this statement anymore because you just said that “This device that was made by aliens” is an alien machine.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

 
 
Posted: 20 September 2011 11:19 AM   [ Ignore ]   [ # 158 ]
Jr. Member
Total Posts:  89
Joined  2011-09-20

Quick, what’s (3503403/32.432 + 874.309383* 73254)^1/3

Computers are designed and built to do that sort of problem - that is what FPUs are for.  Whatever they have in common, brains and computers are very different in what they do well.  A computer can do a calculation like the above more or less ‘out of the box’ in a couple of microseconds, but something a brain finds trivial to do - say, telling whether a shape is round or square - takes a heck of a lot more work.  We (by which I mean I) know very little about how brains actually work.  What we know is that ‘computers’ can do some of the things a brain can do, purely because computers are remarkably flexible.  We can program a computer to play chess, but how closely a chess program emulates the way a chess master actually thinks is debatable…. not very would be my guess.

If we want to program a computer to do something - arithmetic, playing chess or distinguishing squares from circles - we do not pay too much attention to how brains do it… because we don’t know how brains do it!  What we do is construct an algorithm, using the primitive operations we have built into our computer hardware, that achieves the desired result.  A computer shape discriminator might be very efficient, but almost certainly it would not work in the same way the brain does the same job.
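For the curious, one classic hand-built discriminator of the kind described here (a sketch only, and certainly not how the brain does it) compares the shape’s filled area to its bounding box: a filled square covers nearly all of the box, a filled circle only about pi/4 of it. The 0.9 threshold is an arbitrary illustrative choice:

```python
# Sketch of a hand-built "round vs square" discriminator for a binary bitmap
# (a list of rows of 0/1). A filled square covers ~100% of its bounding box,
# a filled circle only ~78.5% (pi/4), so a simple threshold separates them.
# Illustrative only; real images would need noise handling and preprocessing.

def roundness(bitmap):
    filled = [(r, c) for r, row in enumerate(bitmap)
                     for c, v in enumerate(row) if v]
    rows = [r for r, _ in filled]
    cols = [c for _, c in filled]
    box_area = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    return len(filled) / box_area     # ~1.0 for squares, ~0.785 for circles

def classify(bitmap):
    return "square" if roundness(bitmap) > 0.9 else "circle-ish"
```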

The real issue of this thread is cognition, or consciousness - can a computer be conscious, or, more specifically, subjectively conscious?  As things stand today, no one knows how to ‘algorithmise’ consciousness.  Part of the problem is that we aren’t even sure what consciousness is, or what would constitute consciousness in a computer, but another - possibly deeper - problem is the worry that consciousness cannot be turned into an algorithm, full stop.

I am not a dualist - I believe that brains produce consciousness as a result of ‘normal physics’, but I feel the lack of a ‘consciousness algorithm’ is the elephant in the room.  I am aware of the complexity of human brains - the vast number of neurones and the even vaster number of their interconnections - and people have argued that consciousness is an emergent property deriving from that complexity, but that seems more like a statement of faith than a scientific theory!  There is no theory that even approaches being able to answer how much complexity is required to produce consciousness.  Other ‘theories’ depend on such things as feedback and self-reference, but it is clear that not all systems with feedback, or that are self-referential, display consciousness, so such ideas are at best incomplete, and quite possibly just blind alleys.

The question of whether a brain is Turing complete or not seems like a red herring to me.  I don’t think the fact that a brain supports consciousness and an IBM PC doesn’t depends on the notion of Turing completeness.  It would be nice if someone could show how consciousness or cognition can arise in a Turing machine - we would have a chance of producing a genuine ‘artificial consciousness’ (not the same as ‘artificial intelligence’!).  The invention or discovery of such an algorithm would go a long way to dispelling the notion that dualism is necessary for consciousness.  But I think that as things stand dualism cannot be ruled out.  I find that very unsatisfactory, but what is the alternative, other than materialist dogmatism?

 
 
Posted: 20 September 2011 11:53 AM   [ Ignore ]   [ # 159 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

Welcome, Keith.

keithprosser2 - 20 September 2011 11:19 AM

We can program a computer to play chess, but how closely a chess program emulates the way a chess master actually thinks is debatable…. not very would be my guess.

Actually, now that I think about it, they do seem to be rather similar. Both computers and humans use heuristics to decide which lines of play to examine. Then they use material and positional advantage scores to decide on the best choice out of those, and thus which move to make on the current turn. Humans are probably less precise than computers at the latter step. The difference is, at the low level, humans use neurons as their computational unit. But at the high level, they are similar processes.
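As a toy example of the “material advantage score” part of that process (using the textbook piece values; the sample positions are invented, and real engines add many positional terms on top):

```python
# Toy "material advantage" evaluation of the kind mentioned above: sum piece
# values for one side minus the other. Purely illustrative.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_score(white_pieces, black_pieces):
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

print(material_score(["q", "r", "r", "b", "n"],
                     ["q", "r", "r", "b"]))   # -> 3 (white is up a knight)
```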

The question of whether a brain is Turing complete or not seems like a red-herring to me.  I don’t think the fact that a brain supports consciousness and an IBM PC doesn’t depends on the notion of Turing completeness.

I think consciousness arises from physics, making it computable. So it’s not a problem for me.

It would be nice if someone could show how consciousness or cognition can arise in a Turing machine - we would have a chance of producing a genuine ‘artificial consciousness’ (not the same as ‘artificial intelligence’!).

There are some ideas: http://en.wikipedia.org/wiki/Artificial_consciousness#Consciousness_in_digital_computers

The invention or discovery of such an algorithm would go a long way to dispelling the notion that dualism is necessary for consciousness.  But I think that as things stand dualism cannot be ruled out.  I find that very unsatisfactory, but what is the alternative, other than materialist dogmatism?

You mean, positivism and postpositivism in particular? If I’m not mistaken, many of us here are postpositivists.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

 
 
Posted: 20 September 2011 03:10 PM   [ Ignore ]   [ # 160 ]
Jr. Member
Total Posts:  89
Joined  2011-09-20

Actually, now that I think about it, they do seem to be rather similar.

I am a fairly competent computer programmer but a lousy chess player, so I won’t comment on that too much!  It might have been better if I had used shape discrimination as my example.  Would you say a computer and a human discriminate in a rather similar way?  I know how I would program a computer to tell if a bitmap represented a shape that was more round than square (but it would take a lot of code and it would require a lot of steps to do it), but my brain can do it just like that - however I tell a square from a circle, it does not seem to be similar to how a computer would be programmed to do it.

But that is incidental.

I think consciousness arises from physics, making it computable. So it’s not a problem for me.

The problem is that there is no good reason to think that!  I too think that consciousness probably (almost certainly) arises from materialistic (ie non-dualistic) principles.  Unfortunately, I am unaware of any way for it to work.  There is no known mechanism or algorithm to achieve consciousness, no matter how much computation time or complexity is thrown at the problem.  Don’t you think that you’re being a little complacent when you say it’s not a problem (for you)?

 
 
Posted: 20 September 2011 03:11 PM   [ Ignore ]   [ # 161 ]
Sr. Member
Total Posts:  5976
Joined  2009-02-26
domokato - 20 September 2011 11:53 AM

Welcome, Keith.

I think consciousness arises from physics, making it computable. So it’s not a problem for me.


Wow… IMO, that is the crux of the matter. But then, if human consciousness arose from, and still is a part of, a universal binary system, where is the difference from a computer, which is also a binary system?
Perhaps it is a function or result of “entanglement”.

edit: yes, welcome Keith.

[ Edited: 20 September 2011 03:17 PM by Write4U ]
 Signature 

Art is the creation of that which evokes an emotional response, leading to thoughts of the noblest kind.
W4U

 
 
Posted: 20 September 2011 03:34 PM   [ Ignore ]   [ # 162 ]
Sr. Member
Total Posts:  5976
Joined  2009-02-26
keithprosser2 - 20 September 2011 03:10 PM

Actually, now that I think about it, they do seem to be rather similar.

I am a fairly competent computer programmer but a lousy chess player, so I won’t comment on that too much!  It might have been better if I had used shape discrimination as my example.  Would you say a computer and a human discriminate in a rather similar way?  I know how I would program a computer to tell if a bitmap represented a shape that was more round than square (but it would take a lot of code and it would require a lot of steps to do it), but my brain can do it just like that - however I tell a square from a circle, it does not seem to be similar to how a computer would be programmed to do it.

But that is incidental.

I am not a programmer at all, so please forgive,

Why is it difficult for a computer to recognize shapes? Why can we not program symbols as we do in humans when we “learn” the shape of a circle?

 Signature 

Art is the creation of that which evokes an emotional response, leading to thoughts of the noblest kind.
W4U

 
 
Posted: 20 September 2011 03:40 PM   [ Ignore ]   [ # 163 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
keithprosser2 - 20 September 2011 03:10 PM

Actually, now that I think about it, they do seem to be rather similar.

I am a fairly competent computer programmer but a lousy chess player, so I won’t comment on that too much!  It might have been better if I had used shape discrimination as my example.  Would you say a computer and a human discriminate in a rather similar way?  I know how I would program a computer to tell if a bitmap represented a shape that was more round than square (but it would take a lot of code and it would require a lot of steps to do it), but my brain can do it just like that - however I tell a square from a circle, it does not seem to be similar to how a computer would be programmed to do it.

Well, if you use an artificial neural network (which is good at those very types of problems), then the mechanism would be quite similar as well, except it would be expressed in software instead of having physical neuronal connections. For ANNs, the simulated neurons don’t even have to be that sophisticated to solve this problem. I think perceptrons will do.
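For the curious, the classic perceptron learning rule really is only a few lines. Here is a toy version trained on a made-up two-feature dataset (the features, numbers and labels are invented purely for illustration):

```python
# Minimal perceptron with the classic learning rule, trained on a made-up
# two-feature dataset (say, fill-ratio and corner-count of a shape).
# Data and features are invented purely for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1       # nudge weights toward correct output
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# label 1 = "square-like", 0 = "circle-like"
data = [((0.98, 4), 1), ((0.95, 4), 1), ((0.78, 0), 0), ((0.80, 1), 0)]
w, b = train_perceptron(data)
print(1 if w[0] * 0.97 + w[1] * 4 + b > 0 else 0)   # -> 1 ("square-like")
```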

I think consciousness arises from physics, making it computable. So it’s not a problem for me.

The problem is that there is no good reason to think that!

Oh, but there is good reason! Everything else in the universe, when we examine it, turns out to be physical. There’s no way to logically prove that this will always remain true, but this type of reasoning (inductive) has served us very well.

I too think that consciousness probably (almost certainly) arises from materialistic (ie non-dualistic) principles.  Unfortunately, I am unaware of any way for it to work.  There is no known mechanism or algorithm to achieve consciousness, no matter how much computation time or complexity is thrown at the problem.

Consciousness seems magical, but try this thought experiment: observe your own consciousness and see what are its inputs and what are its outputs. Could you theoretically write a program that has the same functionality? If so, that program would claim it was conscious, yet it would just be executing code and manipulating data. You might call its consciousness an “illusion”, but your own consciousness would be an illusion in the same sense.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

 
 
Posted: 20 September 2011 03:44 PM   [ Ignore ]   [ # 164 ]
Sr. Member
Total Posts:  3121
Joined  2008-04-07
Write4U - 20 September 2011 03:34 PM

I am not a programmer at all, so please forgive,

Why is it difficult for a computer to recognize shapes? Why can we not program symbols as we do in humans when we “learn” the shape of a circle?

In a “clean” environment, it is not difficult to recognize a shape. The post office uses handwriting recognition software to read addresses (on a clean envelope). But in the real world, circles are on a “noisy” background and often surrounded by artifacts that make it difficult to “see” the shape.

As is obvious in the success of facial recognition software, many of these problems are being overcome. Today, it’s not really that difficult to recognize shapes via software if you know what you’re looking for.

 Signature 

Turn off Fox News - Bad News For America
(Atheists are myth understood)

 
 
Posted: 20 September 2011 03:55 PM   [ Ignore ]   [ # 165 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
Write4U - 20 September 2011 03:34 PM

I am not a programmer at all, so please forgive,

Why is it difficult for a computer to recognize shapes? Why can we not program symbols as we do in humans when we “learn” the shape of a circle?

There’s actually a lot of computation that goes into it. Your brain probably uses many millions of neurons to figure out whether a shape is a circle or a square. It also has to be trained (during childhood). An artificial neural network also has to be trained in a similar manner, and then it gives quick answers like brains do. It seems easy to you because the entire process has become unconscious and quick.

I suppose you might be able to solve the problem using geometry and math, but then it wouldn’t work well for noisy input data.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

 
 
   