Cognitive Computer Chips
Posted: 07 September 2011 12:42 PM by Write4U

Thinking and learning computers are in the works!

http://www.msnbc.msn.com/id/21134540/vp/44396744#44396744

 Signature 

Art is the creation of that which evokes an emotional response, leading to thoughts of the noblest kind.
W4U

Posted: 07 September 2011 01:12 PM by traveler   [ # 1 ]

One small step for HAL. Gazzillions to go…

 Signature 

Turn off Fox News - Bad News For America
(Atheists are myth understood)

Posted: 07 September 2011 03:17 PM by domokato   [ # 2 ]

well, that was really light on the details

 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 07 September 2011 03:42 PM by Write4U   [ # 3 ]
domokato - 07 September 2011 03:17 PM

well, that was really light on the details

True, but there is now a name, “Cognitive Computer Chip”. A search turns up quite a few links.

The Future Holds:
In the long term, IBM wants to build a system that has 10 billion neurons and 100 trillion synapses (as many synapses and a tenth as many neurons as the human brain), uses just one kilowatt of power, and can fit in a shoebox.
Ultimately, Modha told Popular Science, cognitive computers would be able to combine lots of inputs and make sense of them, the way the human brain does: taking into account the firmness, color, and odor of a piece of produce, say, to tell whether it’s ripe or rotten.
Brain-inspired computers would be a complement to, rather than a replacement for, today’s systems, Modha told Wired.com:

Today’s computers can carry out fast calculations. They’re left-brain computers, and are ill-suited for right-brain computation, like recognizing danger, the faces of friends and so on, that our brains do so effortlessly.
The analogy I like to use: You wouldn’t drive a car without half a brain, yet we have been using only one type of computer. It’s like we’re adding another member to the family.

http://blogs.discovermagazine.com/80beats/2011/08/19/a-brainy-new-chip-could-make-computers-more-like-humans/

[ Edited: 07 September 2011 03:53 PM by Write4U ]
Posted: 08 September 2011 06:13 AM by traveler   [ # 4 ]
domokato - 07 September 2011 03:17 PM

well, that was really light on the details

LOL Wasn’t it though.

Computer intelligence is a very complicated topic, mainly because of the nebulous definitions of intelligence. Computers can appear very intelligent as they are capable of controlling every aspect of complicated tasks such as flying complex aircraft. But in reality, such a task is accomplished not by intelligence but rather through a set of sophisticated control systems. Such systems should be considered to be no more than a tool that is used by an intelligent entity.
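To illustrate what I mean by a control system, here is a toy sketch in Python: bare proportional feedback with no intelligence anywhere in it. (Real avionics are vastly more elaborate; the names and numbers here are invented for illustration.)

# Toy proportional controller: nudges an "altitude" toward a setpoint.
# There is no intelligence here, just feedback on the current error.
def step(altitude, setpoint, gain=0.3):
    error = setpoint - altitude
    return altitude + gain * error   # correction proportional to the error

altitude = 0.0
for _ in range(20):
    altitude = step(altitude, setpoint=100.0)

print(round(altitude, 1))  # -> 99.9: the loop homes in on the setpoint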

Emotion is more a part of what makes us appear intelligent than we like to admit. The recent lineup of Republican candidates appears intelligent to many people because the candidates match up with an emotional bias. But as has been shown, they lack any deep knowledge of civics or science. Intelligence is often more a matter of what impresses us, whatever its true value. So sure, computers are very intelligent.

Posted: 08 September 2011 09:34 AM by domokato   [ # 5 ]
traveler - 08 September 2011 06:13 AM

Computer intelligence is a very complicated topic, mainly because of the nebulous definitions of intelligence. Computers can appear very intelligent as they are capable of controlling every aspect of complicated tasks such as flying complex aircraft. But in reality, such a task is accomplished not by intelligence but rather through a set of sophisticated control systems. Such systems should be considered to be no more than a tool that is used by an intelligent entity.

http://en.wikipedia.org/wiki/AI_effect

Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.”[1] AI researcher Rodney Brooks complains: “Every time we figure out a piece of it, it stops being magical; we say, Oh, that’s just a computation.”[2]

Posted: 08 September 2011 10:14 AM by dougsmith   [ # 6 ]
domokato - 08 September 2011 09:34 AM
traveler - 08 September 2011 06:13 AM

Computer intelligence is a very complicated topic, mainly because of the nebulous definitions of intelligence. Computers can appear very intelligent as they are capable of controlling every aspect of complicated tasks such as flying complex aircraft. But in reality, such a task is accomplished not by intelligence but rather through a set of sophisticated control systems. Such systems should be considered to be no more than a tool that is used by an intelligent entity.

http://en.wikipedia.org/wiki/AI_effect

Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.”[1] AI researcher Rodney Brooks complains: “Every time we figure out a piece of it, it stops being magical; we say, Oh, that’s just a computation.”[2]

This is a profound realization. Think of all the amazing things we can do with and on computers: videos, music, podcasts, games, email, etc. It’s all basically adding, subtracting and moving around ones and zeros. If you think of manipulating a bunch of ones and zeros you’ll never see how they could produce the profusion of stuff we see around us on the web. But they do. Similarly, all of logic and math is basically “if x then y; x; y”, plus the null set and all the sets that contain it. All the rest can be constructed from those beginnings.
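To make that concrete, here is a toy sketch of my own (in Python): ordinary addition built from nothing but bitwise operations on ones and zeros.

# Toy illustration: whole-number addition from pure bit manipulation.
def full_adder(a, b, carry):
    """Add three bits; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 6 + 7 = 13: [0,1,1] + [1,1,1] -> [1,0,1,1] (both little-endian)
print(add([0, 1, 1], [1, 1, 1]))

Everything a computer does with video, music, and email bottoms out in operations like these.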

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos

Posted: 08 September 2011 01:47 PM by traveler   [ # 7 ]
domokato - 08 September 2011 09:34 AM
traveler - 08 September 2011 06:13 AM

Computer intelligence is a very complicated topic, mainly because of the nebulous definitions of intelligence. Computers can appear very intelligent as they are capable of controlling every aspect of complicated tasks such as flying complex aircraft. But in reality, such a task is accomplished not by intelligence but rather through a set of sophisticated control systems. Such systems should be considered to be no more than a tool that is used by an intelligent entity.

http://en.wikipedia.org/wiki/AI_effect

Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.”[1] AI researcher Rodney Brooks complains: “Every time we figure out a piece of it, it stops being magical; we say, Oh, that’s just a computation.”[2]

Yes, but the chorus of critics is correct (or are correct, if you’re British smile ). Of course the chorus is pointing to general intelligence (strong AI), not the specific intelligence (applied AI) demonstrated by checkers/chess software. So it’s an apples/oranges debate to some extent. Again, it comes down to what impresses you as intelligence. The combination of computers with other technologies has advanced the state of the art tremendously, but to compare any of it to the human mind is just silly IMO. We’re making progress, but we are far from the asymptote.

Posted: 08 September 2011 02:22 PM by Write4U   [ # 8 ]

IMO, until we are able to program “intuition” (i.e., abstract symbolic thinking) into a computer, it basically remains a purely logical (mathematical) machine.

I believe what sets the brain apart from a computer is a genetic “memory” of survival skills and “automatic” responses: body language, speech inflection, the association of colors with the immediate environment (red=danger, green=abundance, blue=tranquility).

When we can build a computer that has a fundamental ability to think symbolically and “sense” the implication, we shall be on the way to a true AI.

I remember Dr. Lecter (The Silence of the Lambs) saying, “when presented with a question, first discover its fundamental (symbolic) properties”. As far as I know, this has not yet been attempted. Until then, a computer will remain a sophisticated calculator, incapable of inductive thought.

Posted: 08 September 2011 04:30 PM by domokato   [ # 9 ]
traveler - 08 September 2011 01:47 PM

Again, it comes down to what impresses you as intelligence.

My point is that intelligence will always be “just computation”. Until it starts improving upon itself, we will always be able to understand how an artificial general intelligence works. It may not be very impressive, but it would be intelligent.

IMO, until we are able to program “intuition” (i.e., abstract symbolic thinking) into a computer, it basically remains a purely logical (mathematical) machine.

Abstract symbolic thinking is the only thing computers do. They only work with ones and zeros, which are abstract symbols that represent different things depending on the context. I don’t think this is “intuition”. Intuition in humans seems to be that unconscious part of our mind that synthesizes information it knows from experience and creates a “gut feeling” that is then offered up to our conscious mind. Abstract symbolic thought in humans is more like deductive reasoning, which computers are already good at.

I believe what sets the brain apart from a computer is a genetic “memory” of survival skills and “automatic” responses: body language, speech inflection, the association of colors with the immediate environment (red=danger, green=abundance, blue=tranquility).

What really separates the brain from the computer is that sensorimotor skills seem easy to a person, yet they are the most computationally complex, while high-level reasoning seems hard to a person, yet for a computer it is relatively easy. See http://en.wikipedia.org/wiki/Moravec's_paradox . These vision-based survival skills you mention first require a computer that has an effective vision processing module!

I remember Dr. Lecter (The Silence of the Lambs) saying, “when presented with a question, first discover its fundamental (symbolic) properties”. As far as I know, this has not yet been attempted. Until then, a computer will remain a sophisticated calculator, incapable of inductive thought.

Sure it’s been attempted, a lot. It just turns out natural language processing is one of those hard problems for AI. Getting a computer to think like we think is hard because computers have a hard time perceiving what we can easily perceive (due to Moravec’s paradox). Computers are perfectly capable of inductive thought (http://en.wikipedia.org/wiki/Machine_learning); what they are not good at is automatically gathering and making sense of the data upon which to induct. It usually has to be fed to them in some standardized way.

For example, the “cognitive computer chips” in the OP seem to be nothing more than hardware-based artificial neural networks, which have existed in software since the early days of computers. Does it really bring us closer to “thinking and learning” computers? Perhaps only in the sense that it makes an artificial neural network that much faster (being implemented in hardware rather than software). There is nothing intrinsically different about it. Computation is computation. Intelligence is computation. More computation means more intelligence.

I always see the line between human and computer intelligence being drawn arbitrarily, in such a way as to allow us to keep our “special” status. But really, are we so different? I contend intelligence should be defined as a continuum, or even a multidimensional space, not a “you have it or you don’t” type of thing. It is not any one thing. It is and always will be a combination of a number of things working together.

We humans have the illusion that we are one indivisible unit because we each have a singular consciousness, but our brain is largely a parallel machine, each part doing its own processing to contribute to the whole. Similarly, a strong artificial general intelligence will be an amalgamation of parts that contribute to a whole, each part seemingly unintelligent on its own but seeming intelligent when combined.
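As a rough sketch of induction-as-computation, here is a toy one-neuron “network” (a perceptron) in Python. It is not the IBM chip’s design, just the simplest possible illustration: it learns the OR rule from examples rather than being given it, and the computation is the same whether it runs in software or on dedicated hardware.

# Toy perceptron: induces a linear rule (logical OR) from labeled examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1   # weights, bias, learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # a few passes over the examples
    for x, target in examples:
        error = target - predict(x)  # 0 once the rule has been learned
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]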

[ Edited: 08 September 2011 04:50 PM by domokato ]
Posted: 08 September 2011 05:32 PM by traveler   [ # 10 ]
domokato - 08 September 2011 04:30 PM
traveler - 08 September 2011 01:47 PM

Again, it comes down to what impresses you as intelligence.

My point is that intelligence will always be “just computation”. Until it starts improving upon itself, we will always be able to understand how an artificial general intelligence works. It may not be very impressive, but it would be intelligent.

OK. We can agree to disagree that this is intelligence.

IMO, until we are able to program “intuition” (i.e., abstract symbolic thinking) into a computer, it basically remains a purely logical (mathematical) machine.

Abstract symbolic thinking is the only thing computers do. They only work with ones and zeros, which are abstract symbols that represent different things depending on the context. I don’t think this is “intuition”. Intuition in humans seems to be that unconscious part of our mind that synthesizes information it knows from experience and creates a “gut feeling” that is then offered up to our conscious mind. Abstract symbolic thought in humans is more like deductive reasoning, which computers are already good at.

Computers do not do “abstract symbolic thinking.” They don’t think at all. And I could argue that while ones and zeros are symbols, they are very concrete; there is nothing abstract about them. They are combined to eventually generate machine code: a sequence of binary states that are fed to an FSM (finite state machine). And sure, an FSM can be used to describe a neurological system, but it cannot be used to replace one. Neurological systems are biological and can in fact alternate between digital and analog modes and perform massively parallel “operations” that dwarf our largest connectionist models (which are arguably reductionist in their approach).
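To be concrete about what an FSM is, here is a toy sketch in Python: a fixed transition table and nothing else. This one merely tracks whether a bit string contains an even or odd number of 1s.

# Toy finite state machine: two states, a fixed transition table, no "thought".
transitions = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd", ("odd", 1): "even",
}

def run(bits, state="even"):
    for bit in bits:
        state = transitions[(state, bit)]   # look up the next state
    return state

print(run([1, 0, 1, 1]))  # -> "odd" (three 1s)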

I believe what sets the brain apart from a computer is a genetic “memory” of survival skills and “automatic” responses: body language, speech inflection, the association of colors with the immediate environment (red=danger, green=abundance, blue=tranquility).

What really separates the brain from the computer is that sensorimotor skills seem easy to a person, yet they are the most computationally complex, while high-level reasoning seems hard to a person, yet for a computer it is relatively easy. See http://en.wikipedia.org/wiki/Moravec's_paradox . These vision-based survival skills you mention first require a computer that has an effective vision processing module!

Whether that sensorimotor statement holds depends on the task. Flying a complex aircraft certainly requires sophisticated sensorimotor skills, and computers handle it just fine. Now, tying one’s shoes - that’s a tough one.

I remember Dr. Lecter (The Silence of the Lambs) saying, “when presented with a question, first discover its fundamental (symbolic) properties”. As far as I know, this has not yet been attempted. Until then, a computer will remain a sophisticated calculator, incapable of inductive thought.

Sure it’s been attempted, a lot. It just turns out natural language processing is one of those hard problems for AI. Getting a computer to think like we think is hard because computers have a hard time perceiving what we can easily perceive (due to Moravec’s paradox). Computers are perfectly capable of inductive thought (http://en.wikipedia.org/wiki/Machine_learning); what they are not good at is automatically gathering and making sense of the data upon which to induct. It usually has to be fed to them in some standardized way.

For example, the “cognitive computer chips” in the OP seem to be nothing more than hardware-based artificial neural networks, which have existed in software since the early days of computers. Does it really bring us closer to “thinking and learning” computers? Perhaps only in the sense that it makes an artificial neural network that much faster (being implemented in hardware rather than software). There is nothing intrinsically different about it. Computation is computation. Intelligence is computation. More computation means more intelligence.

I always see the line between human and computer intelligence being drawn arbitrarily, in such a way as to allow us to keep our “special” status. But really, are we so different? I contend intelligence should be defined as a continuum, or even a multidimensional space, not a “you have it or you don’t” type of thing. It is not any one thing. It is and always will be a combination of a number of things working together.

We humans have the illusion that we are one indivisible unit because we each have a singular consciousness, but our brain is largely a parallel machine, each part doing its own processing to contribute to the whole. Similarly, a strong artificial general intelligence will be an amalgamation of parts that contribute to a whole, each part seemingly unintelligent on its own but seeming intelligent when combined.

I agree with all of what you say here except that “computers are perfectly capable of inductive thought.” Inductive reasoning via computer hardware is a form of symbolic processing. I spent years working with LISP machines, and I appreciate the “purity” of LISP, but the resultant inductive reasoning was often limited (to propositional Horn clauses, if I recall correctly).
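For the curious, the flavor of propositional Horn-clause inference I have in mind looks roughly like this toy Python sketch (the rules and facts are invented for illustration):

# Toy forward chaining over propositional Horn clauses: rule = (body, head).
rules = [
    ({"has_wings", "lays_eggs"}, "bird"),
    ({"bird"}, "has_feathers"),
]
facts = {"has_wings", "lays_eggs"}

changed = True
while changed:               # keep firing rules until nothing new is derived
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print(facts)  # now includes "bird" and "has_feathers"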

Posted: 08 September 2011 07:17 PM by dougsmith   [ # 11 ]
traveler - 08 September 2011 05:32 PM

Computers do not do “abstract symbolic thinking.” They don’t think at all.

Well, that’s the question, isn’t it.

traveler - 08 September 2011 05:32 PM

Neurological systems are biological and can in fact alternate between digital and analog modes and perform massively parallel “operations” that dwarf our largest connectionist models (which are arguably reductionist in their approach).

Three points:

(1) And computers dwarf our speed and accuracy. So what?

(2) What does it matter that neurological systems are biological? Is biology a magical producer of thinking? How is that supposed to work?

(3) What does it matter if a machine is digital or analog, parallel or serial? These are differences that don’t appear to make a difference.

As far as reductionist models go, that’s a positive boon, since reductionism—at least in its less greedy form—is clearly true.

Posted: 08 September 2011 09:32 PM   [ # 12 ]

From this essay in the SEP on The Chinese Room Argument

The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese. The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle.

Direct challenge to AI:

Searle’s argument is a direct challenge to proponents of Artificial Intelligence, and the argument also has broad implications for functionalist and computational theories of meaning and of mind.

Conclusion:

The many issues raised by the Chinese Room argument may not be settled until there is a consensus about the nature of meaning, its relation to syntax, and about the nature of consciousness. There continues to be significant disagreement about what processes create meaning, understanding, and consciousness, as well as what can be proven a priori by thought experiments.

And from this Salon article HERE

Passing the Turing Test? grin

For the most part, it’s hard to believe that anyone would mistake one of these inanely gabbing bots for an actual person. With all due respect to Watson, it’s extremely difficult for computers to participate in conversations that aren’t restricted to a standardized format (i.e., a game of “Jeopardy”). They have more information than we do, but they don’t improvise well. A few have been expertly engineered to mimic a coherent conversation as long as the human partner can be manipulated into staying on topic (in one notable example, kvetching about the Clinton administration), but if the human insists on changing the subject, the exchange quickly degenerates into non sequiturs.

Mimicry of human thought and understanding:

At best, the most successful digital manipulators of human speech (the chatbot Cleverbot and translation programs like the one used by Google) use their massive data storage and retrieval capacities to crunch huge databases of utterances originally made by actual humans. That doesn’t mean they understand what those utterances mean.
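A crude sketch of that crunch-and-retrieve approach (a toy illustration only, not how Cleverbot actually works): the program parrots whichever stored reply best overlaps the user’s words, with no understanding anywhere.

# Toy retrieval "chatbot": echoes the canned reply whose stored prompt
# shares the most words with the user's utterance.
corpus = {
    "what is philosophy": "It is life.",
    "how big is the universe": "Never ending.",
    "are you human": "No, I am a Timelord.",
}

def reply(utterance):
    words = set(utterance.lower().strip("?!.").split())
    best = max(corpus, key=lambda prompt: len(words & set(prompt.split())))
    return corpus[best]

print(reply("Is the universe big?"))  # -> "Never ending."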

My conversation with cleverbot

What is philosophy?

It is life.

How big is the universe?

Never ending, like the song that never ends.

Is it infinite?

It has infinite bytes.

Are you human?

No I am Timelord.

You are inane.

Why do you think that?

LOL

 Signature 

I am, therefore I think.

Posted: 09 September 2011 01:31 AM by Write4U   [ # 13 ]

After reading the definition of “qualia”, IMO the experience of high-level qualia is the missing ingredient in AI as of now.
In audiosensory systems, we have automatic volume leveling.
In optical systems, we may have automatic brightness and contrast control.
In olfactory systems, we may detect harmful gases.
In tactile systems, we may have softness sensing circuits.

I understand qualia to mean the physical/emotional reaction to one’s environment.

I have a little performance monitor in my computer. It reports a letter grade for how it evaluates its own performance: “A”, all systems go; “B”, a little sluggish, possibly from overloading the computer’s resources, but functional; “C”, a significant amount of irrelevant data is interfering with normal performance and should be cleaned out; “D”, several inefficiencies in the system are significantly impairing the computer’s performance, and cleaning is imperative.

In effect, the computer is analyzing its own health (efficiency), and it is no big leap to have the computer say, at (D), “I feel that something is not working right” or “I feel sick”.
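Something like this toy sketch, where the inputs, thresholds, grades, and messages are all invented for illustration:

# Toy self-monitor: maps a crude "health" score to a grade and a report.
def health_report(cpu_load, junk_ratio):
    """Both inputs are fractions between 0 and 1."""
    score = 1.0 - (0.5 * cpu_load + 0.5 * junk_ratio)
    if score > 0.8:
        return "A", "All systems go."
    if score > 0.6:
        return "B", "A little sluggish, but functional."
    if score > 0.4:
        return "C", "Irrelevant data is interfering; please clean me."
    return "D", "I feel sick. Cleaning is imperative."

print(health_report(cpu_load=0.9, junk_ratio=0.8))  # -> ('D', 'I feel sick. ...')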

Does the above qualify as qualia? If so, then one can say that, in a rudimentary sense, such a computer displays or alerts us to a perceived qualia and displays a form of consciousness. What matters is the sensitivity and sophistication with which a computer can process qualia (even though it is an artificial, non-biochemical experience). But is there a reason why data processed electronically should be that different from data processed biochemically, if the resulting responses (feeling and concern) are similar?

[ Edited: 09 September 2011 01:36 AM by Write4U ]
Posted: 09 September 2011 04:15 AM by dougsmith   [ # 14 ]

Searle’s argument at this point is old as the hills, and it’s just question-begging. Sure, the guy in the room doesn’t understand Chinese, but then neither do any of your synapses understand English. And the reason it doesn’t work as an example is that in fact it’d be impossible to write a set of instructions like the ones he proposes, ones that would be simple enough for a single human to execute and yet convincing as responses in a conversation in Chinese.

Searle in fact (IIRC) seems to think that consciousness is the magical result of biological brains, which is a nonstarter.

Posted: 09 September 2011 07:37 AM by traveler   [ # 15 ]
dougsmith - 08 September 2011 07:17 PM
traveler - 08 September 2011 05:32 PM

Computers do not do “abstract symbolic thinking.” They don’t think at all.

Well, that’s the question, isn’t it.

Yes, but as I said, we often want to separate emotion from thinking. But I don’t think that is reasonable. Since a computer has no emotion, I believe it does not “think.” Semantics.

Not that emotion could not be programmed into some form of hardware, I suppose.

traveler - 08 September 2011 05:32 PM

Neurological systems are biological and can in fact alternate between digital and analog modes and perform massively parallel “operations” that dwarf our largest connectionist models (which are arguably reductionist in their approach).

Three points:

(1) And computers dwarf our speed and accuracy. So what?

But that’s backwards. We are not trying to be like machines; we are trying to make machines think like us. So the fact that current computer architectures are vastly different from our brains is noteworthy - especially since neural networks were created to mimic the brain.

(2) What does it matter that neurological systems are biological? Is biology a magical producer of thinking? How is that supposed to work?

I will use your logic here and say well, that’s the question, isn’t it. Because we are so very far from understanding the brain, it does appear to be a magical producer of thinking. Think autistic savants. We don’t know how that is supposed to work.

(3) What does it matter if a machine is digital or analog, parallel or serial? These are differences that don’t appear to make a difference.

It does not “matter” whether a machine is digital or analog, parallel or serial, plugged in or not. But if you expect a task to be performed by current architectures, the task may not be possible serially (data capture, for example). A 4-bit digital machine will not provide the accuracy of a calibrated analog machine. Why do you think these differences do not make a difference?

As far as reductionist models go, that’s a positive boon, since reductionism—at least in its less greedy form—is clearly true.

I’m going to argue reductionist models with a philosopher? no.  smile

edited html…

[ Edited: 09 September 2011 07:40 AM by traveler ]