3 of 7
Kurzweil Responds: Don’t Underestimate the Singularity
Posted: 07 March 2012 11:37 AM   [ # 31 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
psikeyhackr - 06 March 2012 06:50 PM
domokato - 06 March 2012 02:47 PM

Annnnd…..what does it mean to understand something?

And how are brains not also just reacting to stimulus according to their programming (aka neuronal wiring)?

Does the definition of “shoe” in a dictionary say that a shoe is a 3-dimensional object?  Does it say that shoes wear out?

Does a computer that does not know that shoes wear out understand shoes?

shoe (sho͞o) n.
1. A durable covering for the human foot, made of leather or similar material with a rigid sole and heel, usually extending no higher than the ankle.
2. A horseshoe.
3. A part or device that is located at the base of something or that functions as a protective covering, as:
a. A strip of metal fitted onto the bottom of a sled runner.
b. The base for the supports of the superstructure of a bridge.
c. The ferrule on the end of a cane.
d. The casing of a pneumatic tire.
4. A device that retards or stops the motion of an object, as the part of a brake that presses against the wheel or drum.
5. The sliding contact plate on an electric train or streetcar that conducts electricity from the third rail.
6. A chute, as for conveying grain from a hopper.
7. Games A case from which playing cards are dealt one at a time.
8. shoes Informal
a. Position; status: You would understand my decision if you put yourself in my shoes.
b. Plight: I wouldn’t want to be in her shoes.

That definition provides no info about all of the different types and colors of shoes, so how would a computer understand them without vision?  And even with vision, how would it understand the problems of shoes that don’t fit well?  There are so many aspects to understanding.

I suppose the word durable might imply that they wear out, if the INTELLIGENT ENTITY can consider that there might be a limit to durability.

So if a computer program can use a camera to look at a shoe, then determine its durability on a scale of 1 to 10, would you say it understands something about the durability of shoes? This kind of analysis is not outside the realm of even current computer capabilities.

but understanding occurs in levels.  As soon as you understand one level of a subject, that usually raises another level of questions to be answered.  So the entity must decide where to stop among all the things it could be interested in.

So an entity must be able to decide what to be interested in, in order to be considered able to understand things? That seems like an overly broad definition. But a computer is still in principle capable of allocating attention. After all, just think of how our brains do it. The brain takes in some visual data, for example, processes it, then unconsciously decides what is most interesting in the scene and turns the eyes there, without you even thinking about it. That’s all automatic and algorithmic, just like a computer. In fact, I can write pseudocode for it:

visualData = getVisualData()
pointsOfInterest[] = findPointsOfInterest(visualData)
mostInterestingPoint = findMostInterestingPoint(pointsOfInterest)
moveEyes(mostInterestingPoint)

The tricky part is finding points of interest, but computer vision is definitely capable of that right now.

Deciding where to allocate attention for the purpose of learning would be relatively similar to this algorithm. The hard part is in the details, but it is all theoretically possible.
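As a concrete (if toy) illustration of that pseudocode, here is a runnable Python sketch. The function names mirror the pseudocode; the "interestingness" measure (local variance, i.e. local contrast) is my own stand-in for illustration, not a real computer-vision salience algorithm.

```python
import numpy as np

def find_points_of_interest(image, patch=8):
    """Score each patch of the image by local variance --
    a crude stand-in for visual salience."""
    h, w = image.shape
    scores = {}
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            scores[(y, x)] = image[y:y + patch, x:x + patch].var()
    return scores

def find_most_interesting_point(scores):
    # The highest-variance patch wins the "attention" competition.
    return max(scores, key=scores.get)

# A flat grey scene with one high-contrast region (the "object").
scene = np.full((64, 64), 0.5)
scene[24:32, 40:48] = np.tile([0.0, 1.0], (8, 4))  # checkered patch

target = find_most_interesting_point(find_points_of_interest(scene))
print(target)  # the checkered patch: (24, 40)
```

A real system would then drive `moveEyes` (or a camera gimbal) toward that point and repeat the loop.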

[ Edited: 07 March 2012 11:51 AM by domokato ]
 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 March 2012 02:50 PM   [ # 32 ]
Sr. Member
Total Posts:  3333
Joined  2011-11-04
psikeyhackr - 06 March 2012 06:50 PM

...Annnnd…..what does it mean to understand something?
That definition provides no info about all of the different types and colors of shoes so how would a computer understand them without vision?  And even with vision how would it understand the problems of shoes that don’t fit well?  There are so many aspects to understanding…

psik

I think you are on the right track.  I think, in order to synthesize human understanding, a computer must be able to perceive, and to integrate perception across, some number of various perceptual domains, e.g., visual, tactile, proprioceptive, vestibular, auditory, olfactory…

 Signature 

As a fabrication of our own consciousness, our assignations of meaning are no less “real”, but since humans and the fabrications of our consciousness are routinely fraught with error, it makes sense, to me, to, sometimes, question such fabrications.

Posted: 07 March 2012 03:03 PM   [ # 33 ]
Sr. Member
Total Posts:  3333
Joined  2011-11-04

As far as coming up with an operational definition of what “understanding” is, that should not be too difficult if it is defined in behavioral terms.  Clearly, as already pointed out in another post, saying “I understand” would not be a reliable behavior that would denote what most of us think of as “understanding” something.  But, perhaps, a novel descriptive statement about something perceived, or a novel explanation of something perceived, could be examples of behavior that denotes “understanding”.

 Signature 

As a fabrication of our own consciousness, our assignations of meaning are no less “real”, but since humans and the fabrications of our consciousness are routinely fraught with error, it makes sense, to me, to, sometimes, question such fabrications.

Posted: 07 March 2012 03:28 PM   [ # 34 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

Does Deep Blue understand chess? I would say it doesn’t understand the history of it, nor the cultural significance of it, but it definitely understands how to play it. Same with Watson and Jeopardy. If you look at how the best human grandmasters think and how Deep Blue thinks, you will find they are relatively similar. Both look lines of moves ahead, and both use some method to figure out which lines are most worth examining, then they use some other method to finally pick a move. Which one understands chess better? Probably whoever wins, which right now is computers.
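That look-ahead can be sketched as minimax search. This is a generic toy version, not Deep Blue’s actual code (which added alpha-beta pruning, custom hardware, and a hand-tuned evaluation function); the game-specific parts are passed in as callbacks.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Search `depth` plies ahead and return the best achievable score.
    `moves`, `apply_move`, and `evaluate` are game-specific callbacks."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal)
    return max(scores) if maximizing else min(scores)

# Toy game: the state is a number, each move adds 1 or 2, and higher
# is better for the maximizer. With 2 plies, max picks +2, min picks +1.
best = minimax(0, 2, True,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # 3
```

For chess, `evaluate` would score material and position, and the “method to figure out which lines are most worth examining” corresponds to pruning and move ordering on top of this skeleton.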

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 March 2012 03:49 PM   [ # 35 ]
Sr. Member
Total Posts:  3333
Joined  2011-11-04
domokato - 07 March 2012 03:28 PM

Does deep blue understand chess? I would say it doesn’t understand the history of it, nor the cultural significance of it, but it definitely understands how to play it. Same with Watson and Jeopardy. If you look at how the best human grandmasters think and how deep blue thinks, you will find they are relatively similar. Both look lines of moves ahead, and both use some method to figure out which lines are most worth examining, then they use some other method to finally pick a move. Which one understands chess better? Probably whoever wins, which right now is computers.

I don’t know, but is this still just an advanced simulation of understanding, unless the algorithms are such that novel learning is taking place?  I am thinking that to get artificial intelligence (or understanding), a computer will need to be capable of learning.  And in terms of the way organisms learn, sensory information must be taken in, processed, and then some sort of behavior must occur to demonstrate that something was learned.

 Signature 

As a fabrication of our own consciousness, our assignations of meaning are no less “real”, but since humans and the fabrications of our consciousness are routinely fraught with error, it makes sense, to me, to, sometimes, question such fabrications.

Posted: 07 March 2012 07:26 PM   [ # 36 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
TimB - 07 March 2012 03:49 PM
domokato - 07 March 2012 03:28 PM

Does deep blue understand chess? I would say it doesn’t understand the history of it, nor the cultural significance of it, but it definitely understands how to play it. Same with Watson and Jeopardy. If you look at how the best human grandmasters think and how deep blue thinks, you will find they are relatively similar. Both look lines of moves ahead, and both use some method to figure out which lines are most worth examining, then they use some other method to finally pick a move. Which one understands chess better? Probably whoever wins, which right now is computers.

I don’t know, but is this still just an advanced simulation of understanding, unless the algorithms are such that novel learning is taking place?  I am thinking that to get artificial intelligence (or understanding) a computer will need to be capable of learning.

It was a fallacy, in the infancy of the field of AI, to think that AI is any one thing, or that a line can be drawn between not-AI and AI, or between AI and conventional intelligence for that matter. Every step towards AI was met with an equal moving of the goalposts (LINK). This will not stop until an AI can match a human in every possible domain, and even then they will say “well, the computer takes up too much space” or “it isn’t as fast as a human” or “it can’t raise a family”, ad infinitum. I think it is better to look at AI as a lot of different problem domains (computer vision, knowledge representation, robot control, machine learning, evolutionary computation, voice recognition, etc.) wherein progress is being made continually in each.

And in terms of the way organisms learn, sensory information must be taken in, processed, and then some sort of behavior must occur to demonstrate that something was learned.

Computers can learn already, but their domains are narrow right now. Netflix learns what kinds of movies you like. So does YouTube. Advertisers learn what products you will be most interested in and show those to you automatically. Watson learns by example and practice. Spam filters learn by experience. Facebook learns which friends you want to interact with most. Google search probably contains a learning algorithm as well.

In this case their sensory information is specific to the domain. For example, Facebook’s “sensory information” includes whose posts you comment on and like, and who you send messages to, and the resulting “behavior” is showing you certain friends’ posts and not posts from other friends.

Learning is not some mysterious thing that only brains can do. It’s a property of certain systems, like colonies of bacteria or slime molds, but also man-made ones like computer programs.
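To make “spam filters learn by experience” concrete, here is a minimal naive-Bayes-style classifier. Real filters are far more elaborate, but the principle is the same: counts accumulated from labeled examples shift every future judgment.

```python
from collections import Counter
import math

class SpamFilter:
    """Tiny naive-Bayes-style filter that learns by experience:
    every labeled message it sees shifts its future judgments."""
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def learn(self, text, label):
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            # Log-prior plus smoothed log-likelihood of each word.
            score = math.log(self.msg_counts[label] + 1)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + 1))
            scores[label] = score
        return max(scores, key=scores.get)

f = SpamFilter()
f.learn("win free money now", "spam")
f.learn("lunch meeting at noon", "ham")
print(f.classify("free money"))  # spam
```

The “sensory information” here is just word counts, and the “behavior” is the spam/ham verdict, exactly the narrow-domain pattern described above.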

I get the feeling for a more general AI to be developed there has to be some application for it. We are seeing it in autonomous robotics. But the costs have to come down, the computing power needs to increase, the size needs to come down, and the theory needs to be improved. And we’re making progress in all of these areas every day.

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 March 2012 07:39 PM   [ # 37 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

Here’s a taste of the attention allocation algorithm that is currently being developed for OpenCog:

http://wiki.opencog.org/w/Attention_allocation

Attention allocation within OpenCog weights pieces of knowledge relative to one another, based on what has been important to the system in the past and what is currently important.

...

This section presents how the flow of attention allocation works.

1. Rewarding “useful” atoms:
1.1. Atoms are given stimulus by a MindAgent if they’ve been useful in achieving the MindAgent’s goals.
1.2. This stimulus is then converted into Short and Long Term Importance, by the ImportanceUpdatingAgent.
2. STI is spread between atoms along HebbianLinks, either by the ImportanceDiffusionAgent or the ImportanceSpreadingAgent.
3. The HebbianLinkUpdatingAgent updates the HebbianLink truth values, based on whether linked atoms are in the Attentional Focus or not.
4. The ForgettingAgent removes either atoms that are below a threshold LTI, or above it.
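The cycle above (stimulus, STI diffusion, forgetting) can be sketched roughly as follows. This is not OpenCog’s actual API: the atom names, link strength, spread fraction, and threshold are all invented for illustration, and the HebbianLink truth-value updating of step 3 is omitted.

```python
# Rough sketch of the STI bookkeeping described above.
atoms = {"shoe": 0.0, "leather": 0.0, "chess": 0.0}   # atom -> STI
hebbian = {("shoe", "leather"): 0.8}                   # link strengths

def reward(atom, stimulus):
    """Step 1: a MindAgent stimulates atoms useful to its goals."""
    atoms[atom] += stimulus

def spread(fraction=0.5):
    """Step 2: STI diffuses along HebbianLinks to associated atoms."""
    for (src, dst), strength in hebbian.items():
        flow = atoms[src] * strength * fraction
        atoms[src] -= flow
        atoms[dst] += flow

def forget(threshold):
    """Step 4: atoms whose importance stays below threshold are dropped."""
    for atom in [a for a, sti in atoms.items() if sti < threshold]:
        del atoms[atom]

reward("shoe", 10.0)   # "shoe" was useful to some goal
spread()               # "leather" inherits importance via the link
forget(1.0)            # "chess" never mattered, so it is forgotten
print(sorted(atoms))   # ['leather', 'shoe']
```

The point of the sketch is just the economics: importance is earned, shared with associates, and lost when unused.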

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 March 2012 07:41 PM   [ # 38 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10

This AI learns by experience: Beating AI, or not? Post your scores!

So does this one: Akinator .... AI at its best!

[ Edited: 07 March 2012 07:45 PM by domokato ]
 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 March 2012 09:22 PM   [ # 39 ]
Sr. Member
Total Posts:  2425
Joined  2007-07-05
domokato - 07 March 2012 11:37 AM

So if a computer program can use a camera to look at a shoe, then determine its durability on a scale of 1 to 10, would you say it understands something about the durability of shoes? This kind of analysis is not outside the realm of even current computer capabilities.

When the computer can recognise shoes I will consider the question.  Until then it is just speculation about this so-called singularity, which we quite likely won’t see in our lifetimes, if ever.

I think using computers to totally revamp our educational system would be a great singularity and would not require artificial intelligence.  I find it really curious that our educators aren’t telling us about this.

http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/rt/printerFriendly/500/421

Netbooks come with 250 to 320 gig drives.  100 gigs is enough space for 100,000 books.  But our educators can’t even suggest something as simple as a National Recommended Reading List, even though that was possible decades before we had cheap computers.  Human beings have enough problems with symbols.  I could have read this in high school:

The Tyranny of Words (1938) by Stuart Chase
http://www.youtube.com/watch?v=M9H1StY1nU8

But I never heard of it until a couple of years ago.

psik

[ Edited: 07 March 2012 09:29 PM by psikeyhackr ]
 Signature 

Fiziks is Fundamental

Posted: 07 March 2012 09:50 PM   [ # 40 ]
Sr. Member
Total Posts:  2715
Joined  2011-04-24
dougsmith - 07 March 2012 07:47 AM
mid atlantic - 06 March 2012 07:43 PM

I have been interested in transhumanism for a few years and, IMO, Kurzweil seems to have an almost religious zeal when describing the singularity; some other transhumanists and scientists have taken issue with his predictions, and I agree with them. That said, it’s still fun to think about.

Yeah, I mean Kurzweil is a smart guy and all, but with this singularity stuff he’s basically a crank. Worth noting that he’s also involved in all kinds of quack self-medication: he claims to take 150 pills a day ...

Yes, all those supplements, and up to 10 glasses of alkaline water? http://www.wired.com/medtech/health/news/2005/02/66585

 Signature 

Raise your glass if you’re wrong…. in all the right ways.

Posted: 07 March 2012 09:50 PM   [ # 41 ]
Sr. Member
Total Posts:  2715
Joined  2011-04-24
mid atlantic - 07 March 2012 09:50 PM
dougsmith - 07 March 2012 07:47 AM
mid atlantic - 06 March 2012 07:43 PM

I have been interested in transhumanism for a few years and in IMO, Kurzweil seems to have an almost religious zeal when describing the singularity; some other transhumanists and scientists have taken issue with his predictions, and I agree with them. That said, it’s still fun to think about.

Yeah, I mean Kurzweil is a smart guy and all, but with this singularity stuff he’s basically a crank. Worth noting that he’s also involved in all kinds of quack self-medication: he claims to take 150 pills a day ...

  A link to Kurzweil’s longevity products. http://www.rayandterry.com/

[ Edited: 07 March 2012 09:59 PM by mid atlantic ]
 Signature 

Raise your glass if you’re wrong…. in all the right ways.

Posted: 07 March 2012 10:00 PM   [ # 42 ]
Sr. Member
Total Posts:  448
Joined  2012-02-02
traveler - 07 March 2012 07:29 AM

That sounds pretty ambitious for a 33 year schedule. If you and Kurzweil are correct, I might just live long enough to see it.
You say organizations will replace corporations. How will these organizations be different from corporations in your opinion? And why do you believe that corporations will meet their demise in just 3 decades?

The reason organizations will replace corporations is somewhat complicated.  3D printing/nanomanufacturing will effectively put an end to corporations, if they haven’t been done for already.  As I said earlier, why bother paying for a Ferrari if you can get an open-source knockoff for free?  Furthermore, by roughly 2020 (possibly sooner), you’ll have PCs with processing power greater than the human brain’s.

Forget the issue of whether they’re sentient or not; they will be able to work like the computers in Star Trek, where someone says, “Computer, design a widget for me,” and the computer promptly does it.  You’ll be able to tell the computer to design you a Ferrari, and it’ll whip the plans up and send them off to your 3D printer in moments.

There is something else, which neither Kurzweil nor any other computer expert that I know of has addressed: quantum computing.  The big thing about quantum computers is that they will be able to crack the encryption used by non-quantum computers in seconds.  There will come a point when quantum computers are low enough in cost that organized crime groups will be able to afford to buy quantum machines.  How on Earth are we going to have secure financial transactions between individuals at that point?  Large organizations will be able to have secure communications and financial transactions, because they’ll have quantum computers, and two of those can have an encrypted communication which is absolutely uncrackable.
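The “absolutely uncrackable” part presumably refers to quantum key distribution, which lets two parties share a secret random key; combined with a one-time pad, the result is information-theoretically secure. A minimal sketch, with `secrets.token_bytes` standing in for the QKD-delivered key:

```python
import secrets

def otp(message, key):
    """One-time pad: XOR each byte with a key byte. Provably unbreakable
    if the key is random, secret, and used only once -- which is exactly
    what quantum key distribution is meant to supply."""
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"transfer $100"
key = secrets.token_bytes(len(msg))   # stand-in for a QKD-shared key
ciphertext = otp(msg, key)
assert otp(ciphertext, key) == msg    # XOR is its own inverse
```

The hard part, of course, is distributing the key securely in the first place, which is the whole point of the quantum channel.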

So, you’ve got a situation where people don’t really need corporations, coupled with an inability to protect the assets of individuals.  At that point, it seems logical to me that corporations as we know them will vanish.  Given that humans are social animals and we tend to seek out people who share our interests, the next step will be for people to form organizations where they focus their energies on things they enjoy.  (Assuming a bunch of lunatics don’t wipe us all out before then with 3D-printed nukes.)

Recent news article about the latest developments in medical nanobots.

 Signature 

“There will come a time when it isn’t ‘They’re spying on me through my phone’ anymore. Eventually, it will be ‘My phone is spying on me’.” ― Philip K. Dick

The Atheist in the Trailer Park

Posted: 07 March 2012 10:02 PM   [ # 43 ]
Sr. Member
Total Posts:  448
Joined  2012-02-02
dougsmith - 07 March 2012 07:47 AM
mid atlantic - 06 March 2012 07:43 PM

I have been interested in transhumanism for a few years and in IMO, Kurzweil seems to have an almost religious zeal when describing the singularity; some other transhumanists and scientists have taken issue with his predictions, and I agree with them. That said, it’s still fun to think about.

Yeah, I mean Kurzweil is a smart guy and all, but with this singularity stuff he’s basically a crank. Worth noting that he’s also involved in all kinds of quack self-medication: he claims to take 150 pills a day ...

For some of the stuff he’s swallowing, there’s legit science showing that it can help improve one’s health.  But I have a hard time believing that he’s cured himself of diabetes like he claims.

 Signature 

“There will come a time when it isn’t ‘They’re spying on me through my phone’ anymore. Eventually, it will be ‘My phone is spying on me’.” ― Philip K. Dick

The Atheist in the Trailer Park

Posted: 07 March 2012 10:15 PM   [ # 44 ]
Sr. Member
Total Posts:  1201
Joined  2009-05-10
psikeyhackr - 07 March 2012 09:22 PM
domokato - 07 March 2012 11:37 AM

So if a computer program can use a camera to look at a shoe, then determine its durability on a scale of 1 to 10, would you say it understands something about the durability of shoes? This kind of analysis is not outside the realm of even current computer capabilities.

When the computer can recognise shoes I will consider the question.

Object recognition using Kinect on the PC [YouTube]
Teaching Kinect to recognize objects on the PC [YouTube]

Until then it is just speculation about this so-called singularity that we quite likely won’t see in our lifetimes, if ever.

Personally, I think the singularity is inevitable, but I’m not sold that it will come as soon as Kurzweil thinks.

[ Edited: 07 March 2012 10:21 PM by domokato ]
 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 March 2012 11:35 PM   [ # 45 ]
Sr. Member
Total Posts:  2715
Joined  2011-04-24
Coldheart Tucker - 07 March 2012 10:02 PM
dougsmith - 07 March 2012 07:47 AM
mid atlantic - 06 March 2012 07:43 PM

I have been interested in transhumanism for a few years and in IMO, Kurzweil seems to have an almost religious zeal when describing the singularity; some other transhumanists and scientists have taken issue with his predictions, and I agree with them. That said, it’s still fun to think about.

Yeah, I mean Kurzweil is a smart guy and all, but with this singularity stuff he’s basically a crank. Worth noting that he’s also involved in all kinds of quack self-medication: he claims to take 150 pills a day ...

Some of the stuff he’s swallowing, there’s legit science showing that it can help improve one’s health.  I have a hard time believing that he’s cured himself of diabetes like he claims.

According to his wiki page http://en.wikipedia.org/wiki/Ray_Kurzweil#Work_on_nutrition.2C_health.2C_and_lifestyle Kurzweil was diagnosed with glucose intolerance, which can be a precursor to type 2 diabetes. But type 2 diabetes can be well managed, and sometimes reversed, just by better eating habits and more physical activity; no magic alternative therapy is needed.  Essentially, even if he were a type 2 diabetic, he could have “cured” himself with lifestyle changes.

 Signature 

Raise your glass if you’re wrong…. in all the right ways.
