A Computational Theory of Consciousness
Posted: 10 June 2011 04:35 PM by domokato

While working on a hobby AI project, I had some thoughts about consciousness.

Through introspection, I observe that consciousness seems to be that part of the brain that has as inputs:

1. Processed sensory data
2. Memories (recorded processed sensory data) (long and short term)
3. Inner voice
4. Emotions
5. Intuition (the results of unconscious processing of processed sensory data) (same as emotions?)

and has as outputs:

1. Muscle control
2. Memory recall control
3. Imagination control
4. Inner voice control (same as imagination control?)
5. Attention control (what inputs to ignore and what inputs to process)

Inner voice is that voice that can recite words in your mind, form arguments, do math, etc.

The actual content of consciousness would be the neurological processing of the inputs to arrive at the outputs. Note that consciousness can get different inputs simultaneously (ex: inner voice + memories + processed sensory data), and have multiple simultaneous outputs (ex: muscle control + inner voice control). This would suggest that all the inputs are processed in a central location so that they may be considered in conjunction when deciding on output.

Also note that some of the outputs recurse back into the inputs. Reasoning falls under inner voice, I think. Reasoning about input before deciding on an output seems to be a recursive function of consciousness; consciousness can generate the inner voice (by perhaps calling on the language part of the brain) as well as “hear” it, in a looping fashion until a decision is reached and something is output.

Memorizing a thought can be considered the same as continually recalling it on purpose, with the actual memorization happening unconsciously (as in outside of consciousness, not as in knocked out).

In this way, consciousness is comparable to a real-time computer algorithm. It takes inputs, processes them, and sends outputs.
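
To make the analogy concrete, here is a minimal sketch of the model as a real-time loop, in Python. Every class and method name here is a hypothetical placeholder of my own invention; the point is only the shape of the data flow: attention-gated inputs, central processing, and outputs that recurse back into the inputs.

# Hypothetical sketch of the input/output model above; none of these
# module names refer to a real system.

class ConsciousnessLoop:
    def __init__(self, senses, memory, language, emotion, intuition, motors):
        self.senses = senses          # source of processed sensory data
        self.memory = memory          # long- and short-term store
        self.language = language      # generates the inner voice
        self.emotion = emotion
        self.intuition = intuition
        self.motors = motors          # muscle control
        self.inner_voice = None       # an output that recurses back as input
        self.attention = {"senses", "memory", "inner_voice",
                          "emotion", "intuition"}

    def step(self):
        # 1. Gather inputs, gated by attention control.
        inputs = {}
        if "senses" in self.attention:
            inputs["senses"] = self.senses.read()
        if "memory" in self.attention:
            inputs["memory"] = self.memory.recall()
        if "inner_voice" in self.attention and self.inner_voice is not None:
            inputs["inner_voice"] = self.inner_voice
        if "emotion" in self.attention:
            inputs["emotion"] = self.emotion.current()
        if "intuition" in self.attention:
            inputs["intuition"] = self.intuition.react(inputs.get("senses"))

        # 2. Central processing: all inputs considered in conjunction.
        decision = self.decide(inputs)

        # 3. Emit outputs, some of which feed back into the next step.
        if decision.get("thought") is not None:
            self.inner_voice = self.language.generate(decision["thought"])
        if decision.get("action") is not None:
            self.motors.send(decision["action"])
        if decision.get("attend") is not None:
            self.attention = decision["attend"]   # attention control

    def decide(self, inputs):
        raise NotImplementedError   # the actual processing is the hard part

Note that inner_voice and attention sit on both the input and output sides of the loop, which matches the recursion described above.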

Does anyone else feel they have the same “inputs” and “outputs” to their consciousness? Does anyone feel they have other inputs and outputs? What about the interpretation of the method consciousness uses to process inputs and arrive at outputs - does yours feel the same or different? Are there any neuroscientists here that may be able to shed light on whether or not this model has any supporting physical evidence?

Edit: attention control seems to be the most interesting output of consciousness, since it directly controls consciousness itself (you can choose to pay attention to whatever you want). Yet it doesn’t seem to qualify as an input, since you cannot pay attention to what your consciousness is paying attention to (try it, it hurts). Perhaps that is because the attempt is self-defeating: the moment you try, you’re paying attention to paying attention, which is kind of like paying attention to nothing.

[ Edited: 11 June 2011 12:45 AM by domokato ]
 Signature 

“What people do is they confuse cynicism with skepticism. Cynicism is ‘you can’t change anything, everything sucks, there’s no point to anything.’ Skepticism is, ‘well, I’m not so sure.’” -Bill Nye

Posted: 10 June 2011 11:05 PM   [ # 1 ] by Write4U

I have no expertise, but your concepts sound reasonable to me.

Except for emotion, all the listed functions could be achieved in AI.

I wondered what would be the fundamental characteristic of “consciousness” and whether that could be duplicated in an AI.
It seems to me that our consciousness works symbolically. This would be where the processed information results in an emotion, which IMO is the differentiating factor.
If human morals are based on that which is either “good” or “bad”, can a symbolic representation be programmed? Can you make a computer (Hal) want to feel “better” and try to avoid feeling “bad”?

[ Edited: 16 November 2011 04:57 PM by Write4U ]
 Signature 

Art is the creation of that which evokes an emotional response, leading to thoughts of the noblest kind.
W4U

Posted: 13 June 2011 10:14 AM   [ # 2 ] by domokato
Write4U - 10 June 2011 11:05 PM

Except for emotion, all the listed functions could be achieved in AI.

Emotion should be achievable. It’s just a matter of processing sensory input a little and generating a “gut reaction”.

Quoting W4U:

If human morals are based on that which is either “good” or “bad”, can a symbolic representation be programmed?

Yes, I think so. In the above model, this would probably fall under intuition (moral intuitions) and inner voice (for analyzing more complex moral situations).

Quoting W4U:

Can you make a computer (Hal) want to feel “better” and try to avoid feeling “badly”?

In the above model, feeling bad or good may come from the emotion-generating part of the AI (or memories). The inner voice can be used to rationalize the emotions and decide on goals. All the inputs to consciousness together can be used to determine the best course of action to achieve those goals. Then the AI can use whatever outputs it has to achieve them (actuators, internet queries, text output, voice synthesis, whatever).
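
To illustrate that flow with a toy sketch in Python (all names and numbers here are invented for the illustration): a “gut reaction” assigns a valence to the input, and a goal is chosen to improve predicted valence.

# Toy sketch of the flow described above: a "gut reaction" assigns
# valence to input, and goals are chosen by predicted valence.
# Everything here is an invented placeholder, not a real architecture.

def gut_reaction(percept):
    """Crude emotion module: map input features to a valence in [-1, 1]."""
    return -0.8 if "threat" in percept else 0.3

# Hypothetical goals, each with a predicted valence per kind of situation.
GOALS = {
    "flee":    {"threat": 0.4, "calm": -0.2},
    "explore": {"threat": -0.9, "calm": 0.7},
}

def choose_goal(percept):
    """Pick the goal predicted to leave the agent feeling best."""
    situation = "threat" if "threat" in percept else "calm"
    return max(GOALS, key=lambda g: GOALS[g][situation])

print(gut_reaction({"threat"}))   # -0.8: the "gut" says this feels bad
print(choose_goal({"threat"}))    # flee
print(choose_goal({"food"}))      # explore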


Posted: 13 June 2011 05:17 PM   [ # 3 ] by Occam (Moderator)

Quoting W4U:

Can you make a computer (Hal) want to feel “better” and try to avoid feeling “badly”?

Of course you can. I just had a laptop computer die, so I bought a new one. The new touch pad wouldn’t respond to my fingers (that is, it was not feeling the touch), so it was feeling badly. If it had become upset that it couldn’t feel my finger touch, then it would be feeling bad.

Sorry, it’s one of my nits - people using an adverb instead of an adjective, possibly because it sounds more erudite.

Occam

 Signature 

Succinctness, clarity’s core.

Posted: 13 June 2011 05:45 PM   [ # 4 ] by domokato

Did you see Kiss Kiss Bang Bang? I was confused about bad vs. badly after seeing that. Thanks for clearing it up.


Posted: 13 June 2011 11:02 PM   [ # 5 ] by Write4U
Occam. - 13 June 2011 05:17 PM

Sorry, it’s one of my nits - people using an adverb instead of an adjective, possibly because it sounds more erudite.

Thanks for the correction.
Please note that I started the post with a disclaimer about my erudition. Moreover, English is my second language.


Posted: 15 November 2011 12:15 PM   [ # 6 ] by TimB
domokato - 10 June 2011 04:35 PM


Here are my thoughts: Consciousness is not some independent entity that exists inside our brains. It is various covert (to outside observers) behaviors that occur, primarily, at a neurological level. It involves covert verbal behavior with one’s self as the listener.

Remembering is a special kind of covert behavior as well. It can involve re-creation of previous perceptual input as well as verbal behavior describing that re-creation. Or, in the case of repeating a phrase in order to remember it, you are simply doing that specific behavior over and over until the neurological pathways that fire off for that phrase are strengthened sufficiently for it to be a well-ingrained behavior.

Emotions have neurological correlates which can remain covert to an outside observer, but often have outwardly observable behaviors (facial expressions, body language, crying, laughing, etc.) accompanying them. Intuition is evidenced by behavior that we emit in response to contingencies that we are not aware of.

Paying attention to something is behavior as well. You may “choose” to attend to something, as perhaps you are choosing to attend to and read this sentence right now. However, as you were attending to the last sentence and this one, your behavior (as is the case with the behavior of all organisms) is subject to internal and external contingencies (as well as your personal historical contingencies) that affect how well you are able to attend.

So, the “inputs” for the various behaviors that occur at the neurological level, and that comprise what we refer to as consciousness, are, I would think, the complex history of each individual: the exposure, throughout one’s life, to all of the stimuli that set the stage for and selected these increasingly developed behaviors, along with, of course, the exposure to current contingencies.

1) Perception of environmental stimuli is necessary. (In “consciousness” behaviors, the environmental stimuli could, at times, be other behaviors occurring at the neurological level, such as covert intraverbal behavior.)

2) A response or responses to environmental stimuli is necessary. In “consciousness” behaviors, the responses may be only at the neurological level and not necessarily also at a motoric level.

3) Perception of reinforcing consequences (which strengthens the neurological response) is necessary. Responses that are strengthened sufficiently become part of the individual’s repertoire.

Note: emotional behavior is primarily respondent behavior and thus does not require reinforcing consequences to develop. Internal verbal behavior, on the other hand, is very complex operant behavior and requires a long, rich history of reinforcement.
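
A minimal sketch of those three requirements as code (Python; the lever-pressing scenario and all names are illustrative placeholders, not a claim about real neurology): responses are sampled from a repertoire per perceived stimulus, and reinforcing consequences strengthen the emitted response.

import random
from collections import defaultdict

# Sketch of the three-part operant loop described above: perceive a
# stimulus, emit a response sampled from a learned repertoire, then
# strengthen that response if it is reinforced.

class OperantAgent:
    def __init__(self, responses):
        self.responses = responses
        # Response strengths per perceived stimulus; all start equal.
        self.strength = defaultdict(lambda: {r: 1.0 for r in responses})

    def emit(self, stimulus):
        """Sample a response in proportion to its learned strength."""
        weights = self.strength[stimulus]
        return random.choices(list(weights), weights=list(weights.values()))[0]

    def reinforce(self, stimulus, response, reward):
        """Reinforcing consequences strengthen the emitted response."""
        self.strength[stimulus][response] += reward

agent = OperantAgent(["press lever", "groom", "wander"])
for _ in range(100):
    r = agent.emit("light on")
    agent.reinforce("light on", r, reward=1.0 if r == "press lever" else 0.0)
print(agent.emit("light on"))   # now most likely "press lever"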

 Signature 

As a fabrication of our own consciousness, our assignations of meaning are no less “real”, but since humans and the fabrications of our consciousness are routinely fraught with error, it makes sense, to me, to, sometimes, question such fabrications.

Posted: 15 November 2011 02:20 PM   [ # 7 ] by Write4U

Sounds reasonable to me. However, this type of consciousness happens at the very lowest levels of awareness in nature and is merely a response to stimuli.
But what are abstract thinking and imagination, where we mentally create visions of things that do not or even could not exist?


Posted: 15 November 2011 05:47 PM   [ # 8 ] by TimB
Write4U - 15 November 2011 02:20 PM

... But what is abstract thinking and imagination, where we mentally create visions of things that do not or even could not exist?

I think it is also behavior, although the relevant controlling stimuli are not as obvious. When we are doing the behavior of seeing something that is present, it is simple visual perception of external stimuli that are present. When we dream (during sleep) we are seeing without perceiving any externally existing stimulus. Rather, I think what is happening is that the neuronal correlates for seeing that stimulus are firing off.

<I suspect that in dreaming while asleep, these neuronal correlates are firing off somewhat haphazardly, and that when we remember dreams we are organizing the material into some coherent pattern. (I also suspect that our tendency to organize perceived material into patterns is respondent, i.e., behavior that we are born with which occurs automatically.) But I digress.>

Visualizing something that does not exist is probably akin to seeing without the presence of external correlating stimuli, but doing so when one is awake. This is probably a learned behavior that is also akin to remembering behavior. In remembering, we can re-create, or approximately re-create, a visual perception of external stimuli that were present but no longer are. (It is interesting to note that remembering behavior is not some recorded replay of a past event, but rather is a new behavior each time one remembers a particular event. Thus our remembering is subject to change just as any other behavior.) Visualizing things that are not present, I believe, can be a behavior that one learns, and one can learn to alter the visualization. Doing this behavior “creatively” can best be done by persons who have broader experiences, i.e., multiple sources of representations of things to visualize.


Posted: 15 November 2011 07:39 PM   [ # 9 ] by Write4U

Such as imagining a god and accepting that as real?


Posted: 16 November 2011 12:16 PM   [ # 10 ] by TimB
Write4U - 15 November 2011 07:39 PM

Such as imagining a god and accepting that as real?

That could be either a delusional behavior or a faith-based behavior. I think a delusional behavior is probably a respondent behavior in which something has gone haywire on a neurological level for an individual. Whereas the faith-based behavior is choosing to believe something without requiring objective evidence for it and/or in spite of objective evidence against it. The latter, I think, is generally an operant behavior that is shaped within a culture or cultures and possibly influenced by our social nature.

Although, I would not rule out that beyond the cultural contingencies that maintain faith-based behaviors, there are some inherent tendencies toward belief in a “God”. This could be true if, during our evolution, the tendency to having faith in “God” resulted in a relative advantage for some over others in surviving to reproduction.

Re: how our social nature might lend itself to humans’ tendency to believe in “God”, it has occurred to me that believing in an all-powerful being beyond ourselves is consistent with each of our earliest experiences in life. We are born completely helpless and are at the mercy of, and rely on, such a being in our earliest experiences in a chaotic environment outside of the womb. (This being, of course, is a parent or other caregiver.) But I wonder if these early experiences in life can, to some degree, generalize to a belief in “God”, or at least help make believing in “God” feel right.

[ Edited: 16 November 2011 12:24 PM by TimB ]

Posted: 16 November 2011 02:22 PM   [ # 11 ] by TimB
domokato - 13 June 2011 10:14 AM

“Intelligence” can only be inferred from behaviors that one emits. I think that efforts to create artificial intelligence should therefore be attempts to create something that learns in the same ways that organisms learn. Thus you need a machine that perceives, responds, and whose responses are selected (made more likely to occur subsequently in that same perceived condition) if the response constitutes something “intelligent”. I think efforts toward making a machine that achieves something like consciousness need to include a selection process for the development of responses comparable to early developmental milestones of verbal behavior. Check out this paper; I think it is a movement in that direction, though the computer stuff is over my head: http://www.rmac-mx.org/pdfs/vol26no2/141-158.pdf


Posted: 16 November 2011 03:58 PM   [ # 12 ] by domokato

Yes, many AI researchers are taking the ANN (artificial neural network) approach: http://bluebrain.epfl.ch/page-56882-en.html

I like the evolutionary approach better (http://en.wikipedia.org/wiki/Genetic_programming). It is potentially more powerful. And computers have a different architecture than brains do, so I’m willing to bet there is a more effective approach to intelligence than ANNs.
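
To show the core loop behind that approach, here is a deliberately tiny genetic-algorithm skeleton in Python. Real genetic programming evolves program trees rather than bitstrings; this toy just evolves a bitstring toward a fixed target.

import random

# Tiny genetic-algorithm skeleton: evaluate, select, vary, repeat.

TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                       # a perfect individual evolved
    parents = population[:20]       # truncation selection: keep the fittest
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(100)]

print(generation, max(fitness(g) for g in population))

In practice, everything interesting lives in the genome representation and the fitness function; the loop itself stays about this small.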


Posted: 17 November 2011 03:08 PM   [ # 13 ] by TimB
domokato - 16 November 2011 03:58 PM

The “evolutionary approach” seems reasonable to me (as far as I could understand it from the Wikipedia page) in that it allows for variations of programs to be emitted and then the most fit selected. However, if it is used only by providing program input to do some complex task, such as designing the best bridge, I don’t see that directly leading to the development of an intelligent machine.

I think that the development of an intelligent machine would require a machine that can perceive across various sensory inputs (e.g., visual, auditory, and/or perhaps haptic, proprioceptive, vestibular, olfactory). It would require hardware that enables various types of responding, and algorithms that enable a spectrum of responses that are then selected by virtue of “fitness”. But the “fitness” should be in relation to what is programmed in as the equivalent of a response that results in the fulfilling of a want or a need. That “fit” response should then be selected to have a higher probability of occurring when the perceived input conditions are the same.

For example, I recently heard about robots, called “ostriches”, that are able to walk across uneven surfaces (if the surfaces are not too challenging). I have no idea how they are being programmed to do that, but I assume they have visual perception of some sort, and the equivalent of some sort of vestibular sensing. Clearly they have hardware that enables them to take steps in various ways across some continuum of possible ways to take steps. If they could also be programmed for learning as I suggested above, then taking the most fit step responses (according to the perceived input of terrain and the robot’s own sense of its movement in space) that resulted in the best consequence (a perceived recognition that it is reaching a destination, which in this case would be the robot’s programmed “want fulfillment”) would result (by algorithm) in the robot being more likely to take the step the same way when faced with the same or similar terrain subsequently. Any and all “fit” step responses would need to be retained, along with some continuum of potential random step responses, but the most “fit” step responses would always be selected by a programmed increase in their relative probability of occurrence in similar perceived conditions. This would seem to me to be a true analog of learning.
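
Here is a sketch of what that learning rule might look like in Python (purely hypothetical, and certainly not how the actual robots are programmed): the learner keeps an average payoff for each stride length on each terrain, usually exploits the best-known stride, and keeps trying random variants so the continuum of responses is retained.

import random

# Hypothetical sketch of the step-learning idea described above.

STRIDES = [round(0.1 * i, 1) for i in range(1, 11)]   # candidate strides (m)

class SteppingLearner:
    def __init__(self, epsilon=0.2):
        self.avg = {}        # (terrain, stride) -> average progress
        self.count = {}
        self.epsilon = epsilon

    def pick_stride(self, terrain):
        if random.random() < self.epsilon:
            return random.choice(STRIDES)             # keep exploring
        return max(STRIDES, key=lambda s: self.avg.get((terrain, s), 0.0))

    def feedback(self, terrain, stride, progress):
        """Progress toward the destination acts as 'want fulfillment'."""
        key = (terrain, stride)
        n = self.count.get(key, 0) + 1
        self.count[key] = n
        old = self.avg.get(key, 0.0)
        self.avg[key] = old + (progress - old) / n    # incremental mean

learner = SteppingLearner()
for _ in range(500):
    stride = learner.pick_stride("gravel")
    progress = 1.0 - abs(stride - 0.4)    # pretend 0.4 m works best here
    learner.feedback("gravel", stride, progress)

print(learner.pick_stride("gravel"))      # usually 0.4 after learning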

[ Edited: 17 November 2011 03:12 PM by TimB ]

Posted: 17 November 2011 03:49 PM   [ # 14 ] by domokato

To evolve intelligent agents, their environment must be such that intelligent behavior is selected. The main questions that arise are: what is intelligence? How can we select for intelligence? How can we do that automatically? How can we do that quickly? And how can the genetic mechanisms be designed to allow the agents to achieve a high level of intelligence without getting stuck in local optima?

Consider, for example, evolving a bipedal robot controller. For each individual in the population, you would have to let it control the robot for enough time to accurately judge its competency. The population should be in the thousands to ensure a diverse gene pool. So if you’re letting each individual have control of the robot for 30 seconds, and you have 1000 individuals, that’s at least 30000 seconds per generation (8 hours, 20 minutes). And you would probably have to let it run for hundreds of generations before anything useful crops up (all depending on the algorithm you use). On top of that, you would need some fitness function that could automatically judge each individual’s competency for you. And some individuals will find ways to cheat the fitness function if they can, so you have to be very careful about how that’s designed.
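
Spelling out that arithmetic (the 300-generation figure below is just an assumed stand-in for “hundreds of generations”):

population = 1000      # individuals per generation
trial_seconds = 30     # real-time evaluation per individual
generations = 300      # assumed: "hundreds of generations"

per_generation = population * trial_seconds            # 30,000 s
hours, rem = divmod(per_generation, 3600)
print(hours, "h", rem // 60, "min per generation")     # 8 h 20 min
total_days = per_generation * generations / 86400
print(round(total_days), "days of robot time overall")  # ~104 days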

Simulating evolution sounds simple until you start to realize how long nature has been doing it and improving on the process. It is also a slow process. But with computing power growing ever faster, I can only see it becoming a more and more valid approach.


Posted: 17 November 2011 04:17 PM   [ # 15 ] by Write4U
domokato - 17 November 2011 03:49 PM

IMO, it is a matter of memory. Theoretically, a computer could learn much faster than a child, as it does not need to sleep and can use 24 hours per day to learn. The problem lies in memory capacity and the sorting structure for accessibility.

