Michael Shermer - The Believing Brain
Posted: 07 June 2011 10:29 AM   [ # 16 ]
Sr. Member
Total Posts:  9284
Joined  2006-08-29
psikeyhackr - 07 June 2011 10:09 AM

They do not comprehend what the bit combinations mean.

Can you load a picture of a herd of cows into Photoshop and have the computer explain what the picture is?

Perhaps not Photoshop CS5, but who knows, maybe Photoshop CS50 will be able to accomplish this. The fact that it might not comprehend what it is doing is irrelevant. But perhaps when the programmers design Photoshop CS50 (with the ability to recognize a picture of a herd of cows) and turn it on for the first time, we may find (to our surprise and to the computer’s) that the computer has acquired consciousness, or maybe it will happen with CS19 or CS2000. I still believe that consciousness (that is, being aware) is only a byproduct of a very complex calculating machine.

Posted: 07 June 2011 10:30 AM   [ # 17 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

Processing power is like the size of an office building; the more you have, the more you can do with it.

Intelligence/smartness is like a business model; the better it is, the more effective the company will be.

Both are advancing. More processing power is allowing smarter applications to run on smaller devices. I know many applications are running up against the processing-power limit of the devices they run on (i.e. the intelligence is ahead of the processing power needed to run it). As processing power increases (per unit of device size), more intelligent applications can be made immediately: Photoshop can handle more undos; speech-to-text and text-to-speech get closer to real time (conversation time); websites load faster. Artificial intelligence is an emergent phenomenon. The pieces will come together slowly.

Another contributor to AI is science (theory): cognitive science, neuroscience, and even evolutionary biology. As these fields progress, theories of intelligence and learning will progress, and so will their application in AI.

Our brain is a big parallel processing machine. It has evolved very efficient “algorithms” to do what it does (vision processing, sound processing, etc.). Kurzweil predicts that by 2019 a $1000 desktop computer will have as much processing power as a human brain. Whether or not that turns out to be true, our algorithms (machine vision, natural language processing, etc.) in 2019 will probably not be as efficient as the human brain’s, so it may take some extra theory to get to human-level intelligence. That said, evolutionary algorithms may be able to take up some of the slack.
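
To give a rough idea of what I mean by an evolutionary algorithm, here is a minimal sketch in Python. The genome, the fitness function, and all the constants are placeholders I made up for illustration; a real system might evolve neural-network weights instead.

import random

GENOME_LEN = 10
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome):
    # Toy objective: prefer genomes whose values are close to 1.0.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Randomly perturb some genes.
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Keep the fitter half as parents, refill with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))

No explicit theory of the problem goes in; selection and mutation grope toward better solutions on their own, which is the sense in which this kind of search could take up some of the slack.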

/speculation

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 June 2011 10:32 AM   [ # 18 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
psikeyhackr - 07 June 2011 10:09 AM

von Neumann machines manipulate symbols in the form of bit combinations.  They do not comprehend what the bit combinations mean.

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), and they also do not individually comprehend anything, yet as a whole they form a mind.

Can you load a picture of a herd of cows into Photoshop and have the computer explain what the picture is?  Do you expect a 5-year-old kid to be able to do that?

Can a 5-year-old kid keep track of 100 undos in excruciating detail? Can he draw a photo-realistic lens flare?

Computers are great at some things; humans are great at others. It’s a mistake to think that human intelligence is the only kind.

[ Edited: 07 June 2011 10:35 AM by domokato ]
 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 June 2011 10:41 AM   [ # 19 ]
Administrator
Total Posts:  15305
Joined  2006-02-14
George - 07 June 2011 10:22 AM
dougsmith - 07 June 2011 09:58 AM
George - 07 June 2011 09:39 AM

So I take it that in your opinion there is a principled distinction to be made between processing power and intelligence (?).

There isn’t “when processing power is being put to tasks that involve learning, memory and reason”, as I said before. Raw processing power isn’t the same as intelligence. Processing power used to learn, remember and reason is intelligence.

Are you saying that computers are not intelligent (or not easily made to be intelligent) because they don’t (easily) learn, remember and reason?

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos

Posted: 07 June 2011 10:44 AM   [ # 20 ]
Administrator
Total Posts:  15305
Joined  2006-02-14
domokato - 07 June 2011 10:32 AM

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), ...

It’s true that neurons can be modeled as either firing or not firing, but seen at the micro level, neurons behave more like small factories than small switches. They can behave in quite complex ways and so are poorly modeled by semiconductors.

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos

Posted: 07 June 2011 10:49 AM   [ # 21 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

Yes, I know, but that doesn’t invalidate my overall point that they do not individually comprehend anything, yet together they form a mind. Plus, neurons can be simulated in computers (at least as far as our understanding of them goes), so even if our theory of intelligence/learning does not progress while our processing power does, eventually we will be able to simulate a whole brain and reach human-level AI that way.
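
For what it’s worth, here is about the simplest kind of neuron simulation I have in mind, a leaky integrate-and-fire model sketched in Python. The constants are arbitrary illustrative values, not physiological measurements.

# A very rough leaky integrate-and-fire neuron, one of the simplest models
# used in neural simulations. Constants are illustrative, not physiological.
LEAK = 0.9          # fraction of membrane potential retained each time step
THRESHOLD = 1.0     # potential at which the neuron fires
RESET = 0.0         # potential after firing

def simulate(inputs):
    """inputs: input current per time step; returns the spike train."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = LEAK * potential + current   # integrate input, leak charge
        if potential >= THRESHOLD:
            spikes.append(1)      # fire
            potential = RESET     # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Constant weak input: the neuron integrates until it crosses threshold,
# fires, resets, and repeats -- the ones and zeroes I mentioned earlier.
print(simulate([0.3] * 20))

Whole-brain simulation would mean running something like this (or a far more detailed model) for every neuron and synapse, which is mainly a question of processing power rather than of new theory.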

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 June 2011 11:01 AM   [ # 22 ]
Sr. Member
Total Posts:  9284
Joined  2006-08-29
dougsmith - 07 June 2011 10:41 AM

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

Is Photoshop’s undo a raw processing operation or a matter of remembering? I know a computer doesn’t remember the same way we do (how could it?), but in my opinion it remembers nevertheless.

Posted: 07 June 2011 11:05 AM   [ # 23 ]
Administrator
Total Posts:  15305
Joined  2006-02-14
domokato - 07 June 2011 10:49 AM

Yes, I know, but that doesn’t invalidate my overall point that they do not individually comprehend anything yet they form a mind together. Plus neurons can be simulated in computers (at least as far as our understanding of them goes), so even if our theory of intelligence/learning does not progress while our processing power does, eventually we will be able to simulate a whole brain and reach human-level AI that way.

Right, although I’m not sure we’ll do it by simulating a brain. It could be that we’ll just end up designing something that acts the way a person does while what goes on inside is quite different.

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos

Posted: 07 June 2011 11:07 AM   [ # 24 ]
Administrator
Total Posts:  15305
Joined  2006-02-14
George - 07 June 2011 11:01 AM
dougsmith - 07 June 2011 10:41 AM

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

Is Photoshop’s undo a raw processing operation or a matter of remembering? I know computer doesn’t remember the same way we do (how could it?) but in my opinion it remembers nevertheless.

It’s trivial for a computer to remember. The problem is to remember selectively those things that are important to the task, and make use of them only, while discarding the useless stuff. In other words, the problem is using memory to learn.
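
To make the contrast concrete, here is a toy sketch in Python; the class names and numbers are mine and purely illustrative. The undo stack stores every state verbatim, while the second class keeps only a compact summary that is useful for a task.

# Rote memory: trivially store every state so any of them can be restored.
class UndoStack:
    def __init__(self):
        self.states = []

    def save(self, state):
        self.states.append(state)        # remember everything, verbatim

    def undo(self):
        return self.states.pop() if self.states else None

# Memory in the service of learning: keep only a summary that helps with
# the task (here, estimating a single quantity) and discard the raw data.
class RunningEstimate:
    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def observe(self, value):
        self.count += 1
        # update the estimate, then throw the raw value away
        self.estimate += (value - self.estimate) / self.count

    def predict(self):
        return self.estimate

undo = UndoStack()
for doc_state in ["draft 1", "draft 2", "draft 3"]:
    undo.save(doc_state)
print(undo.undo())        # "draft 3": perfect recall, no selection, no use

learner = RunningEstimate()
for measurement in [2.0, 4.0, 6.0]:
    learner.observe(measurement)
print(learner.predict())  # 4.0: kept only what matters for the prediction

The hard part in AI is the second kind, deciding what is worth keeping and how to use it, not the storage itself.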

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos

Posted: 07 June 2011 11:13 AM   [ # 25 ]
Administrator
Total Posts:  15305
Joined  2006-02-14

domokato, I probably shouldn’t have started my last post by saying “right”. I think it’s still very much an empirical question as to whether any feasibly constructible computing device could really simulate a human brain. It might be, for instance, that the chemical complexity and chaotic interactions between sub-neural elements are so vast that it is impossible to simulate one adequately. (We might have to know the initial conditions too precisely, or there might be too many interacting chemical elements to compute in a feasible amount of time.)

But that doesn’t imply that we couldn’t construct something just as intelligent, or more intelligent, by doing something more like classic AI.

These are all empirical questions and I don’t think we can know the outcome beforehand.

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos

Posted: 07 June 2011 11:19 AM   [ # 26 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

I’m not sure our simulation needs to be that accurate to get the desired effect (learning; current artificial neural networks are capable of some degree of it), but you’re right, we can’t know until we know.
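
As an example of the “some degree” of learning I mean, here is a minimal sketch in Python: a single artificial neuron (a perceptron) learning the AND function from examples. The values are toy choices, nothing more.

import random

# Training examples for logical AND: inputs and the desired output.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
LEARNING_RATE = 0.1

def predict(x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

# Repeatedly nudge the weights in whatever direction reduces the error.
for _ in range(100):
    for x, target in samples:
        error = target - predict(x)
        weights = [w + LEARNING_RATE * error * xi for w, xi in zip(weights, x)]
        bias += LEARNING_RATE * error

print([predict(x) for x, _ in samples])   # should converge to [0, 0, 0, 1]

It is nothing like human learning, but it is memory being shaped by experience rather than just stored.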

[ Edited: 07 June 2011 11:31 AM by domokato ]
 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 June 2011 12:08 PM   [ # 27 ]
Sr. Member
Total Posts:  9284
Joined  2006-08-29
dougsmith - 07 June 2011 11:07 AM
George - 07 June 2011 11:01 AM
dougsmith - 07 June 2011 10:41 AM

Well, what I’m saying is that it’s difficult to make computers intelligent (as opposed to giving them raw processing power) because it’s difficult to design systems that learn, remember and reason. Raw processing power is a comparatively easy problem to solve.

Is Photoshop’s undo a raw processing operation or a matter of remembering? I know computer doesn’t remember the same way we do (how could it?) but in my opinion it remembers nevertheless.

It’s trivial for a computer to remember. The problem is to remember selectively those things that are important to the task, and make use of them only, while discarding the useless stuff. In other words, the problem is using memory to learn.

Again, the difference here is that Photoshop is not designed to remember in order to learn the way we are, but that is only a minor difference between us and not enough of a reason, IMO, to refer to Photoshop’s task of remembering as not intelligent. Chess computers use memory to learn.

[ Edited: 07 June 2011 12:10 PM by George ]
Posted: 07 June 2011 01:25 PM   [ # 28 ]
Sr. Member
Total Posts:  2291
Joined  2007-07-05
domokato - 07 June 2011 10:32 AM
psikeyhackr - 07 June 2011 10:09 AM

von Neumann machines manipulate symbols in the form of bit combinations.  They do not comprehend what the bit combinations mean.

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), and they also do not individually comprehend anything, yet as a whole they form a mind.

Our brains do not funnel bits into a central processor, which is what creates the so-called von Neumann bottleneck in a computer.

You can try comparing a neuron to a flip-flop all you want and assume you can extrapolate from there.  But so far we are just making smaller, faster, and more multi-connected von Neumann machines and approaching the point where the software required to coordinate the processors is choking off the power of the processors.
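
One rough way to put a number on that coordination cost is Amdahl’s law. The 5% serial fraction in the Python sketch below is just a figure I picked for illustration, but the shape of the result is the point: past a certain number of processors, the serial coordination work dominates.

# Amdahl's law: if a fraction of the work is serial (coordination, scheduling,
# shared-memory contention), adding processors stops helping.
SERIAL_FRACTION = 0.05   # illustrative guess at the non-parallelizable share

def speedup(processors):
    return 1.0 / (SERIAL_FRACTION + (1.0 - SERIAL_FRACTION) / processors)

for n in (1, 2, 8, 64, 1024):
    print(n, "processors ->", round(speedup(n), 1), "x speedup")
# 1024 processors buy only about a 20x speedup with 5% serial overhead.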

Have you ever written an assembly language program?

But a believing brain is one that accepts something as true or false without understanding it, so it is behaving like a stupid computer.

psik

 Signature 

Fiziks is Fundamental

Posted: 07 June 2011 01:55 PM   [ # 29 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
psikeyhackr - 07 June 2011 01:25 PM
domokato - 07 June 2011 10:32 AM
psikeyhackr - 07 June 2011 10:09 AM

von Neumann machines manipulate symbols in the form of bit combinations.  They do not comprehend what the bit combinations mean.

Neurons in our brains manipulate ones and zeroes too (firing vs not firing), and they also do not individually comprehend anything, yet as a whole they form a mind.

Our brains do not funnel bits into a central processor creating what is called a von Neumann bottleneck.

You can try comparing a neuron to a flip-flop all you want and assume you can extrapolate from there.  But so far we are just making smaller, faster, and more multi-connected von Neumann machines and approaching the point where the software required to coordinate the processors is choking off the power of the processors.

But what matters is functional equivalence. If you can accurately simulate a brain in a computer, then it’s functionally the same thing as a brain. A modern CPU can perform many more calculations per second than a neuron, which is what makes this kind of simulation possible, or at least possible in the near future.
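
Some very rough numbers, using commonly cited order-of-magnitude estimates rather than precise figures, just to show the scale involved:

# Back-of-envelope feasibility estimate; every figure here is a rough,
# commonly cited order of magnitude, not a measurement.
NEURONS = 1e11                # ~100 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4     # ~10,000 synapses per neuron
UPDATES_PER_SECOND = 1e2      # ~100 Hz effective update rate

ops_needed = NEURONS * SYNAPSES_PER_NEURON * UPDATES_PER_SECOND   # ~1e17 ops/s
DESKTOP_CPU_OPS = 1e11        # ~100 GFLOPS for a high-end desktop CPU today

print("CPUs needed:", ops_needed / DESKTOP_CPU_OPS)   # ~1e6

Around a million desktop CPUs is supercomputer territory today, not desktop territory, but it is the kind of gap that hardware growth has historically closed, which is why I say near future rather than now.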

Have you ever written an assembly language program?

Yes, I have a BS in computer science.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 June 2011 05:14 PM   [ # 30 ]
Sr. Member
Total Posts:  2291
Joined  2007-07-05
domokato - 07 June 2011 01:55 PM

If you can accurately simulate a brain in a computer then it’s functionally the same thing as a brain.

Tests have been done on people where electrical stimulation was applied to a person’s brain, which caused the subject to have some specific recollection.  But how can stimulating the same place in another person’s brain bring up the same memory if they did not have the same experience?

Everybody’s brain would be wired somewhat differently, so what kind of simulation can be done, and how can the simulation be tested?

Where is the CPU in anybody’s brain?  LOL

The metaphor that people have been insisting on making between computers and human brains for the last 60 years has been nonsense.  The problem is that computers manipulate symbols according to whatever program they run but understand nothing about what the symbols mean.  We do not know how our understanding corresponds to the signals moving in the brain.  Where is the memory of anybody’s high school graduation stored in their brain?

psik

 Signature 

Fiziks is Fundamental
