What is Rational?
Posted: 20 September 2007 11:14 AM   [ # 16 ]
tscott
Member
Total Posts:  102
Joined  2007-09-14

Golly!  This is nifty!

From an epistemological standpoint, defining “rational” is a slippery thing.  Even when an operative definition (or less rigorously, a generally agreed broad concept) can be made, folks still get into problems.  For aeons economists presupposed that people behaved rationally when making choices about which resources to acquire and relinquish.  Then they found that most people make “irrational” choices because of emotional motivations, which may include revenge, aggression, or short-term satisfaction.

In my own struggles with this notion (and related ones), I’ve dreamed up a kind of hierarchy of rationality which I hope to stuff into a book on which I’m working.  I have since discovered that others have addressed this issue similarly (google “DIKW”), though I have confidence that my definitions are more rigorous (cocky bastard, eh?).  VERY briefly, it goes like this (a little illustrative sketch follows the list):

1) Data - symbolic syntax conforming to a grammar (could be numbers, letters, genes)

2) Information - A correspondence between data and internally or externally observable properties and processes

3) Knowledge - Valid causal relations between interacting entities, specifically represented by relationships between the information correlated with such entities

4) Wisdom - The application of knowledge to successfully predict/plan a course of action leading to a desired outcome, or similarly to infer a plausible or verifiable causal chain explaining a past or presently observed state

5) Enlightenment - Appropriate application of wisdom (e.g. valid cause-and-effect post- or pre-diction) to novel systems and contexts apart from those of original application.
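
Purely as an illustration, the five levels can be written as an ordered type.  This is my own sketch, nothing standard; the level names come from the list above, and the class name and helper are invented:

from enum import IntEnum

class Rationality(IntEnum):
    DATA = 1           # symbols conforming to a grammar
    INFORMATION = 2    # data mapped onto observable properties and processes
    KNOWLEDGE = 3      # valid causal relations between interacting entities
    WISDOM = 4         # knowledge applied to predict or explain outcomes
    ENLIGHTENMENT = 5  # wisdom carried successfully into novel contexts

def at_least(level: Rationality, required: Rationality) -> bool:
    """True if a claim sits at or above the required level of the hierarchy."""
    return level >= required

# e.g. a raw genome sequence is DATA; knowing which gene causes which trait
# is KNOWLEDGE; that alone does not yet qualify as WISDOM.
assert at_least(Rationality.KNOWLEDGE, Rationality.INFORMATION)
assert not at_least(Rationality.KNOWLEDGE, Rationality.WISDOM)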

To state it more prosaically, being a store of factual information makes one smart, but not wise.  Wisdom is recognized (even colloquially) as being correct prediction or postdiction, with “wiser” people being those who can correctly predict more complex phenomena or deal with complex systems.  Enlightenment involves intellectual leaps, whereby metaphors, heuristics, or more formal (e.g. mathematical/physical) descriptions of causal relationships are applied with success to brand new contexts.

A real problem science and society faces is the complexity of the systems regarding which we are increasingly obliged to exhibit wisdom.  The causal relationships are so numerous, so prone to feedback and non-linearities (“chaos”), and often practically inscrutable (we simply cannot know all the interacting pieces or how they interact) that we are becoming overwhelmed.

As an earlier post mentioned, statistics (of which thermodynamics is a special application) can offer some shortcuts, as can the “best guess” and averaging tendencies of democratically tallied prediction.  However, the latter depends upon a relatively competent set of decision-makers (or guessers) - e.g. ones with valid, though perhaps individually incomplete, knowledge.  Otherwise, our choices are essentially random.

The need for an informed constituency as a prerequisite to a functioning democracy was understood by our Founders, and even Jefferson - who tended to champion anarchy - recognized that education was of paramount importance.  Certainly it is not just education, but education about valid causal relationships and wise courses of action that matters most.  Science embodies exactly this kind of approach, with the added notion of institutionalized skepticism.

Thus, I suspect that it is more effective and clear to talk about a scientific method of understanding and acting in and on our world, than to talk about the more slippery notion of “rational”.

Posted: 25 September 2007 12:14 AM   [ # 17 ]
Baloo
Member
Total Posts:  112
Joined  2007-09-16
tscott - 20 September 2007 11:14 AM

Golly!  This is nifty!

From an epistemological standpoint, defining “rational” is a slippery thing.  Even when an operative definition (or less rigorously, a generally agreed broad concept) can be made, folks still get into problems.  For aeons economists presupposed that people behaved rationally when making choices about which resources to acquire and relinquish.  Then they found that most people make “irrational” choices because of emotional motivations, which may include revenge, aggression, or short-term satisfaction.

In my own struggles with this notion (and related ones), I’ve dreamed up a kind of hierarchy of rationality which I hope to stuff into a book on which I’m working.  I have since discovered that others have addressed this issue similarly (google “DIKW”), though I have confidence that my definitions are more rigorous (cocky bastard, eh?).  VERY briefly, it goes like this:

1) Data - symbolic syntax conforming to a grammar (could be numbers, letters, genes)

2) Information - A correspondence between data and internally or externally observable properties and processes

3) Knowledge - Valid causal relations between interacting entities, specifically represented by relationships between the information correlated with such entities

4) Wisdom - The application of knowledge to successfully predict/plan a course of action leading to a desired outcome, or similarly to infer a plausible or verifiable causal chain explaining a past or presently observed state

5) Enlightenment - Appropriate application of wisdom (e.g. valid cause-and-effect post- or pre-diction) to novel systems and contexts apart from those of original application.

Hi TScott!

I love this!
Based on the above, where would you put the social science of Economics?

It seems that in a field as large as economics, there are going to be a number of people, and groups of people, that have different levels of understanding on a wide range of topics.

tscott - 20 September 2007 11:14 AM

To state it more prosaically, being a store of factual information makes one smart, but not wise.  Wisdom is recognized (even colloquially) as being correct prediction or postdiction, with “wiser” people being those who can correctly predict more complex phenomena or deal with complex systems.  Enlightenment involves intellectual leaps, whereby metaphors, heuristics, or more formal (e.g. mathematical/physical) descriptions of causal relationships are applied with success to brand new contexts.

A real problem science and society faces is the complexity of the systems regarding which we are increasingly obliged to exhibit wisdom.  The causal relationships are so numerous, so prone to feedback and non-linearities (“chaos”), and often practically inscrutable (we simply cannot know all the interacting pieces or how they interact) that we are becoming overwhelmed.

As an earlier post mentioned, statistics (of which thermodynamics is a special application) can offer some shortcuts, as can the “best guess” and averaging tendencies of democratically tallied prediction.  However, the latter depends upon a relatively competent set of decision-makers (or guessers) - e.g. ones with valid, though perhaps individually incomplete, knowledge.  Otherwise, our choices are essentially random.

The need for an informed constituency as a prerequisite to a functioning democracy was understood by our Founders, and even Jefferson - who tended to champion anarchy - recognized that education was of paramount importance.  Certainly it is not just education, but education about valid causal relationships and wise courses of action that matters most.  Science embodies exactly this kind of approach, with the added notion of institutionalized skepticism.

Thus, I suspect that it is more effective and clear to talk about a scientific method of understanding and acting in and on our world, than to talk about the more slippery notion of “rational”.

I agree.  Clearly education, especially education about valid causal relationships and wise courses of action, is highly valuable.  I also agree that science does a good job of deducing what these relationships are.

That said, there seems to be a great big gap between what science can tell us about the effective ‘process’ to produce an effect and the ‘rational purpose’ for that effect.

Take creating nuclear energy and the rational purpose for that energy.  Clearly, to date, nuclear energy has been employed in a number of very different ways, and humanity has a fair scientific understanding of nuclear energy.  There are a fair number of people who would argue that these technologies (that science has given us) have rational purposes that are “important to them”.  But certainly there are going to be people who disagree with the importance of these purposes and, as a result, argue that the technology isn’t being employed effectively.

I think one of my concerns with switching the topic from ‘what is rational?’ to ‘what is effective?’ is that the second focuses on improving technologies, while the first keeps the focus on the individual.  Why keep the focus on the individual and not on the technology?  Well, I think that it’s much, much easier to define an effective process to employ technologies once we’ve decided the purpose of their use.  And I have to assume that science is unable to discover the purpose of their use.  I.e., people are going to have to decide the end use of all this cool technology we are creating.  Again, just a personal thought, for my own personal purposes: there is way more than enough technology out there to meet any and all personal needs I can imagine coming up in the future, with perhaps the exception of medical technologies that would help me/others avoid some of the inevitable inconveniences of mortality.

That said, to what extent is the avoidance of pain really ‘rational’?  Or do I just skip that question and go straight for the quest for effective avoidance?  And what happens when we skip the ‘rational’ question on ‘nuclear energy’ and go straight for the effective question?

It seems that if we leave a vacuum in the ‘what is rational’ step of the equation, it’s more than likely to get filled with someone else’s definition of a rational use, say, ‘national security is a rational use of nuclear energy’, in which case science ends up looking for the most effective way to national security via nuclear energy.  Science has, of course, succeeded in improving the effectiveness of nuclear energy, to the point where the policy is now “Mutual Assured Destruction”, which the Nash equilibrium predicts is an effective policy, as long as all parties are as large as countries.  However, not all parties are as large as countries anymore, and as a result, MADness is no longer an effective policy.  Whoops!  And we have made the risks higher than what they may have been if we had not spent so much time on the effectiveness side of the equation…
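
Just to put a toy model behind the Nash point: the sketch below is invented for illustration (the payoff numbers mean nothing in themselves), but it shows why mutual restraint holds only while every player is big enough to be destroyed by retaliation:

from itertools import product

STRATS = ("hold", "strike")

def payoff(a: str, b: str) -> tuple:
    """Return (payoff_A, payoff_B).  Any strike triggers assured retaliation."""
    if a == "strike" or b == "strike":
        return (-100, -100)   # mutual destruction
    return (0, 0)             # uneasy but stable peace

def is_nash(a: str, b: str) -> bool:
    """Neither player can gain by deviating unilaterally."""
    pa, pb = payoff(a, b)
    no_gain_a = all(payoff(alt, b)[0] <= pa for alt in STRATS)
    no_gain_b = all(payoff(a, alt)[1] <= pb for alt in STRATS)
    return no_gain_a and no_gain_b

eqs = [prof for prof in product(STRATS, STRATS) if is_nash(*prof)]
print(eqs)  # [('hold', 'hold'), ('strike', 'strike')]
# Mutual restraint is an equilibrium only because the deviator expects
# devastating retaliation; change the payoff function for a player too small
# (or too dispersed) to be deterred, and the 'hold'/'hold' equilibrium is gone.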

So maybe the real question I’m looking for an answer to is: what is the procedure by which we determine the rational uses for all that science is giving us?

What is the scientific method equivalent for discovering the ‘rational purpose’ question?  And can we do so at a rate faster than the scientific method is discovering technologies that are not without inherent inconvenience?

I am in no way saying that the people doing the science need to slow down - in fact, the faster the better - but if there are those who are coming up with the technologies, where are those who are coming up with the rational purposes?

-baloo

[ Edited: 25 September 2007 01:52 AM by Baloo ]
 Signature 

Look for the bare necessities
The simple bare necessities
Forget about your worries and your strife
I mean the bare necessities
Old Mother Nature’s recipes
That brings the bare necessities of life.

Posted: 25 September 2007 01:50 AM   [ # 18 ]
Baloo
Member
Total Posts:  112
Joined  2007-09-16
dougsmith - 20 September 2007 08:01 AM

There is a famous philosophical tale called “Buridan’s Ass”, about a donkey placed equidistant between two identical bales of hay. Since there was no reason to choose one over the other, the donkey was unable to choose and hence starved to death between them. This is meant as an illustration that it is possible to be, in a sense, too rational; that is, too intent on gaining perfect insight into a matter before beginning to act. Sometimes action for its own sake is necessary, even on imperfect information.

However, if what you are suggesting is something more limited, pointed at particular issues where you feel that humans routinely make mental errors, then you are certainly right that critiquing and rethinking may be useful. This is the sort of issue that got Daniel Kahneman and Amos Tversky to start their work on cognitive biases. Their studies of these biases, however, were based in part on extensive testing of human subjects in cognitive psychology. That is, they weren’t simply doing theory.

Hi Doug!

Both of these links were very interesting!

Action for its own sake seems like an interesting concept, and I agree there are clearly cases where it has value.  But I would be very interested in seeing the statistical probability of causing ‘no harm’ when taking action for action’s sake.  It seems like we would then get into a discussion of the types of actions that can be taken without knowing the effects of that action.

Cognitive biases seem to suggest that we should expect a fair degree of unpredictability in any action that doesn’t happen in a controlled environment that has been tested several times before and recorded properly.

I guess one of my concerns is that I often hear attacks that an organization (a group of people) is dangerous to society, or at minimum worth opposing, because it’s ‘irrational’.  A fair number of Democrats think Republicans are irrational, and a fair number of Republicans think that Democrats are irrational.  A fair number of theists think that atheists are irrational, and a fair number of atheists think that theists are irrational; a fair number of capitalists think that non-capitalists are irrational, and the reverse, etc., etc.

My personal thought is that, for example, Democrats are ir(Republican)rational, or lacking ‘Republican’ rationality, or lacking an understanding of Republican logic, and as a result are ineffective in logically appealing to Republicans.  And Republicans are ir(Democrat)rational, or lacking ‘Democrat’ rationality, or lacking an understanding of Democrat logic, and as a result are ineffective in logically appealing to Democrats.

I have a hard time making the assumption that either side (of any of the above) seriously believes that they have all the answers, or seriously believes they are able to definitively say that in all cases and in all circumstances they have superior answers to all the questions that life throws at us.  More often than not, I think they say they have ‘better’ answers to the ‘more important’ questions without always acknowledging that both ‘better’ and ‘more important’ are highly subjective descriptors.  Each group also has some degree of variation of thought within it, though they seem to portray a much smaller fraction of that variety to those outside their group.

In contrast, I have a much easier time making the assumption that the various groups start out with different premises, while using some level of rationality, and come to different conclusions.

If we assume that it’s in everyone’s interest for these groups to work through the effects of their differences via discussion (verbal/written/video/etc.) rather than some other way - or rather than not working through their differences at all - then I think there is a very different type of discussion that comes from both sides of a question assuming that the other side has a ‘different set of premises’ than the type of discussion that comes from both (all) sides arriving at the table with the assumption that all others are ‘irrational’ and unable to come to mutually beneficial conclusions given new information.

This assumption that both sides are not ‘irrational’ has two basic parts.  The first is easy to test: ‘both sides have different premises’ - I’m not sure anyone would oppose that premise as much as the second, ‘both sides employ some level of rationality (high enough to be considered not irrational)’.  That said, to test the second premise, I feel like I would need an unbiased definition of ‘rational’, the problem being that both sides have a high incentive to suggest a ‘definition of rational’ that highly favors their own particular set of premises.

When I was learning Japanese there were a number of dictionaries where people had gone to a great deal of effort in defining nouns, verbs, adjectives, etc., and the grammar that maps from Japanese to English (and many other languages).  The end result is a methodology that allows both sides to keep using their own language within their respective groups, while still being able to communicate effectively across groups.

Logic seems like the rule set (much like grammar) that needs to be translated across the above groups, and many others.  And then you work back through that grammar to parse premise from value, and value from assumption (for each of the groups), noting when no equivalents are found, much like you can use grammar to break English or Japanese into nouns, verbs, adjectives, etc., and note when no equivalents are found on one side or the other.

This is obviously going to take some serious collaboration between people who are well versed in each respective group’s logic, lingo and culture, as well as the help of a number of linguists (logic masters) who work with each group to accurately build the mapping.
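
To make the dictionary analogy concrete, here is a rough sketch of the kind of structure I have in mind.  The group terms and entries below are invented placeholders, purely to show the shape, not the result of any real mapping work:

from typing import Dict, Optional

# One group's premises mapped onto another group's nearest equivalents.
# None marks a gap: a term with no agreed counterpart on the other side.
PREMISE_MAP: Dict[str, Optional[str]] = {
    "individual liberty": "personal autonomy",
    "sacred": None,                      # no equivalent found yet
    "market efficiency": "resource stewardship",
}

def translate(term: str) -> str:
    """Return the other group's nearest term, or flag the gap explicitly."""
    target = PREMISE_MAP.get(term)
    if target is None:
        return f"[no equivalent for '{term}' - needs discussion, not assertion]"
    return target

for term in PREMISE_MAP:
    print(f"{term!r} -> {translate(term)!r}")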

So, how do we decide when another party is truly ‘irrational’ and when they simply have a different variation of logic and a unique set of premises?

-baloo

 Signature 

Look for the bare necessities
The simple bare necessities
Forget about your worries and your strife
I mean the bare necessities
Old Mother Nature’s recipes
That brings the bare necessities of life.

Posted: 25 September 2007 07:43 AM   [ # 19 ]
dougsmith
Administrator
Total Posts:  15343
Joined  2006-02-14

Well, there aren’t two sets of logic or rationality, one for Democrats and the other for Republicans. They both use the same logic, the same sorts of argumentative moves. (And so too theists, atheists, capitalists, anti-capitalists, etc.)

The difference between them, as you note, is in the premises they accept as true. In some particular thorny cases, it’s in what they accept as evidence for their conclusions.

I do see where you’re going. What you’d need is a group of moderates who were close enough on enough of the particulars to thrash out the precise arguments they were making, and in particular to nail down their premises. Then you’d get arguments on the premises, and eventually arguments on what counted as good evidence and what did not.

There’s a problem, though. This sort of procedure can work in a scientific study because scientists are (as a rule) very careful to be precise about the nature of their claims. They make very limited and clear assertions, such as that compound X eradicates fungus Y under conditions A,B,C. If two scientists disagree about the evidence, they can run the experiment again; and since the claim is limited and clear, they know what it would be to run the experiment again.

In politics, discussion of religion, et cetera, the claims are usually very vague and changeable. Often they depend on differences of terminology which are themselves hard to reconcile. The claims are not limited but usually grandiose, in ways that can never be even largely nailed down by evidence. So, a standard point of contention between theists and atheists is what counts as God, or what counts as faith. You will have differences of opinion on all of them, as well as on what counts as evidence for and against any particular claim.

So while I do think the procedure of getting moderates together to hash out issues is a good one (indeed, it’s essential in any political system to have a group of centrists to bridge differences and make compromise), nevertheless I would not expect that this would really solve or settle anything for very long. Just to start with, the fringe elements on both sides would almost certainly reject any bridging compromise as false. They would reject the premises or claim that the agreed-to evidence was inconclusive.

I don’t mean this to be a counsel of despair. There is good work one can do in hashing out differences with serious people. But I would not enter into the process expecting it to be solvable. Disagreement and hence compromise will always exist.

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos

Posted: 27 September 2007 12:14 AM   [ # 20 ]
Baloo
Member
Total Posts:  112
Joined  2007-09-16
dougsmith - 25 September 2007 07:43 AM

Well, there aren’t two sets of logic or rationality, one for Democrats and the other for Republicans. They both use the same logic, the same sorts of argumentative moves. (And so too theists, atheists, capitalists, anti-capitalists, etc.)

The difference between them, as you note, is in the premises they accept as true. In some particular thorny cases, it’s in what they accept as evidence for their conclusions.

I do see where you’re going. What you’d need is a group of moderates who were close enough on enough of the particulars to thrash out the precise arguments they were making, and in particular to nail down their premises. Then you’d get arguments on the premises, and eventually arguments on what counted as good evidence and what did not.

Yep, exactly.

dougsmith - 25 September 2007 07:43 AM

There’s a problem, though. This sort of procedure can work in a scientific study because scientists are (as a rule) very careful to be precise about the nature of their claims. They make very limited and clear assertions, such as that compound X eradicates fungus Y under conditions A,B,C. If two scientists disagree about the evidence, they can run the experiment again; and since the claim is limited and clear, they know what it would be to run the experiment again.

In politics, discussion of religion, et cetera, the claims are usually very vague and changeable. Often they depend on differences of terminology which are themselves hard to reconcile. The claims are not limited but usually grandiose, in ways that can never be even largely nailed down by evidence. So, a standard point of contention between theists and atheists is what counts as God, or what counts as faith. You will have differences of opinion on all of them, as well as on what counts as evidence for and against any particular claim.

A) I think you give science too much credit here.  Just because there are some scientific disagreements that can be tested doesn’t mean that all scientific disagreements can be tested, or even that they are tested, or that when they are tested both sides of a scientific argument readily agree with the evidence of the test.

Take evolutionary science, or a claim that birds evolved from dinosaurs - how again do you test that one?  You can’t, so you look for evidence, say DNA, bones, the fossil record, etc., and then scientists take various positions in relation to the evidence and see which makes the most sense to them.  Surely the big questions of almost any field of science are called ‘questions’ because answers that satisfy all parties have yet to be found.

B) I think you give too little credit to politics and religion.  Just because there are some who make vague claims, and because there are a high number of disagreements, does not mean that there are not some very clear patterns in terms of what each group values.  Again, what is visible from outside the group isn’t always what is visible from inside the group.

C) I also wonder if we wouldn’t benefit from a distinction between ‘what a group values’ and ‘the thought process that a group uses to try and get what they value’.

Take a very sticky disagreement like ‘does God exist?’ between theists and atheists.  One side claims one thing, the other the other.  There is clearly one approach where both sides can have a very long discussion where both sides get nowhere.

But I have a very hard time believing that both sides of this question are interested in their particular answer for reasons that don’t extend beyond that basic question.  More often than not, I see people, on both sides, trying to use the answer to this question as a ‘premise’ for a second set of logic that has a conclusion they are interested in.  But even that conclusion is often a premise for another set of logic, and another conclusion, which is used as a premise for something else.  This continues until the person comes to a ‘need’ (in the Adam Smith use of the term) that the person believes is either i) at risk, or ii) in a conditional position to obtain.  And they (either directly or indirectly) (consciously or unconsciously) (in a single setting or over a very long period of time) link their own answer to the ‘does God exist?’ question to the probability of gaining or losing that need (or set of needs).

There are millions of beliefs that we don’t share with each other; the reason that we focus on a few is because these few are more directly linked to the most personal values of those concerned on both sides.  I.e., you don’t have a serious disagreement until there is a disagreement and the value that both sides put in the question is very high.

dougsmith - 25 September 2007 07:43 AM

So while I do think the procedure of getting moderates together to hash out issues is a good one (indeed, it’s essential in any political system to have a group of centrists to bridge differences and make compromise), nevertheless I would not expect that this would really solve or settle anything for very long. Just to start with, the fringe elements on both sides would almost certainly reject any bridging compromise as false. They would reject the premises or claim that the agreed-to evidence was inconclusive.

So I agree that the fringe of both sides would reject the premises of the other side, and they would most likely reject any agreed-to evidence, which is why I think this is the wrong goal.  I see this as if a group of linguists had the goal of determining which of all languages is the “best”, and then, when that question was answered, sought to teach everyone else in the world the “best” language: “WE the moderate linguists have determined the best language, and now you all must learn it.”  So, yeah, I don’t think this would work with language, and I don’t think this would work with politics, religion, etc.

But this isn’t what linguists do.  What they do is seek to build systems that allow both sides to keep their own languages and still have highly accurate exchanges of both meaning and value.

dougsmith - 25 September 2007 07:43 AM

I don’t mean this to be a counsel of despair. There is good work one can do in hashing out differences with serious people. But I would not enter into the process expecting it to be solvable. Disagreement and hence compromise will always exist.

That disagreement and compromise will always exist is not really a bad thing.  If anything, this need for continued conversation is the one thing that has any hope of keeping humanity intact.

What I do think would be a good thing is a methodology of communication that allows groups in disagreement to discuss issues of importance in a way that maximizes the outcomes for both sides (as opposed to the current methodologies, which don’t seem to be very productive).  And like a dictionary, which is a set of words/ideas that helps translate between people of different languages, between people that have very little in common and very little shared experience, I think there could be a set of words/ideas set up that allowed people of different politics, of different religions, of different experiences to better communicate.

Back to the topic of the differences between science and politics, etc.  While the scientific method is clearly the best methodology to discover truth, it doesn’t seem to be a useful methodology for deciding what we do with the truth that has been discovered.

I’m going to switch into economist mode here for a minute… the value of the discovery of truth is directly linked to the value of the uses of that same truth.  If we want people to value the discovery of truth, they will need to see the value in the uses of that truth.  The total value of the uses of a truth is directly linked to the sum of the value that all people individually gain from the uses of that truth.  Few uses of truth provide equal value to all persons.  If a person gains little or no value from the uses of newly discovered truth, they will see little or no value in that truth’s discovery.  If a person sees no value in the uses of a truth, they should also be expected not to value its discovery.

If we don’t demonstrate the value of the uses of newly discovered truth, we should not expect people to value the scientific method.  If we want people, or groups of people, to place more value on the scientific method, we will need to demonstrate the value in the discovery of truth.  If we want people, or groups of people, to increase the value they see in the discovery of truth, then we have the obligation to demonstrate the value of the uses of that soon (or not-so-soon) to be discovered truth.  That said, we are going to have a very difficult time demonstrating value to people, and groups of people, if we don’t know what they value.  Serious studies that increase our understanding of what given groups of people value increase the probability that we will be able to demonstrate the value of the discovery of truth, and thus the value of the efficient discovery of truth, i.e. the scientific method.
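
The arithmetic behind that argument is trivial, but making it explicit may help.  The names and numbers below are invented; the only point is the aggregation and the zero case:

# Toy sketch of the "value of truth" argument: a person's support for a
# discovery (and for the method behind it) tracks the value they personally
# see in its uses, and the total value is just the sum over individuals.
perceived_value_of_uses = {
    "person_A": 10.0,   # benefits directly from the technology
    "person_B": 2.5,    # benefits a little, indirectly
    "person_C": 0.0,    # sees no use that touches anything they value
}

total_value = sum(perceived_value_of_uses.values())
print(f"total value of the uses of this truth: {total_value}")

for person, value in perceived_value_of_uses.items():
    expected_support = "low" if value == 0 else "some"
    print(f"{person}: expected support for the discovery -> {expected_support}")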

-baloo

[ Edited: 27 September 2007 12:30 AM by Baloo ]
 Signature 

Look for the bare necessities
The simple bare necessities
Forget about your worries and your strife
I mean the bare necessities
Old Mother Nature’s recipes
That brings the bare necessities of life.

Posted: 27 September 2007 11:38 AM   [ # 21 ]
dougsmith
Administrator
Total Posts:  15343
Joined  2006-02-14
Baloo - 27 September 2007 12:14 AM

A) I think you give science too much credit here.  Just because there are some scientific disagreements that can be tested doesn’t mean that all scientific disagreements can be tested, or even that they are tested, or that when they are tested both sides of a scientific argument readily agree with the evidence of the test.

Take evolutionary science, or a claim that birds evolved from dinosaurs - how again do you test that one?  You can’t, so you look for evidence, say DNA, bones, the fossil record, etc., and then scientists take various positions in relation to the evidence and see which makes the most sense to them.  Surely the big questions of almost any field of science are called ‘questions’ because answers that satisfy all parties have yet to be found.

Well, there’s more than one way to test a hypothesis. At any rate, the important thing is to gather evidence. One form of evidence involves doing experimentation. But not all sciences are experimental sciences. Some are historical, such as archaeology or evolutionary biology. (Although to be fair, one can do experiments in evolutionary biology as well, working with quickly evolving organisms).

But at any rate, evolutionary biologists can and do work out their differences based on the evidence, as you say. Really that’s no different in kind from doing so based on experiment. One might well say that any archaeological or paleontological dig is an experiment, with hypotheses about what one will discover and either confirmatory or disconfirmatory evidence revealed by the experiment.

Baloo - 27 September 2007 12:14 AM

B) I think you give too little credit to politics and religion.  Just because there are some who make vague claims, and because there are a high number of disagreements, does not mean that there are not some very clear patterns in terms of what each group values.  Again, what is visible from outside the group isn’t always what is visible from inside the group.

I’m not nearly so sanguine. In my experience, one can only get any sort of coherent answer to “what the group values” by asking a small subset of very smart, clear thinkers to tell you. But having done that, if you then were to ask “the group” what they thought of the answer provided by their clearest thinkers, you would get disagreement. That is, unless you were talking about some theocratic cult, where “the group” was established by people looking to let someone else do the believing for them.

I think there’s the illusion of cohesion and “clear patterns” because each of us tends to associate with like-minded individuals within our “groups”, and is hence less aware of the disagreement outside our own group of friends.

At any rate, I’d want some evidence of these “very clear patterns” of groupthink before I’d take them as demonstrated. And even so, the issues of vagueness and mutability would remain.

Baloo - 27 September 2007 12:14 AM

But this isn’t what linguists do.  What they do is seek to build systems that allow both sides to keep their own languages and still have highly accurate exchanges of both meaning and value.

Not sure I get your analogy ... linguists aren’t like mediators. They are engaged in a descriptive study of the structure of language. The people focused on the accurate exchange of meaning are translators.

Baloo - 27 September 2007 12:14 AM

What I do think would be a good thing is a methodology of communication that allows groups in disagreement to discuss issues of importance in a way that maximizes the outcomes for both sides (as opposed to the current methodologies, which don’t seem to be very productive).  And like a dictionary, which is a set of words/ideas that helps translate between people of different languages, between people that have very little in common and very little shared experience, I think there could be a set of words/ideas set up that allowed people of different politics, of different religions, of different experiences to better communicate.

Honestly, it’s a worthy idea, but I don’t think it would work. It would slow discussion down to a crawl, and nobody would feel that the extra layer of interference was necessary. It would also raise derivative arguments about whether the translations were accurate.

Put another way, if you feel like that is a task you want to undertake, I’m pulling for you. But I’ll bet you that the best method is simply to have the moderates talk together by themselves. I don’t think they really need mediators, and to the extent that they do, they’re likely to reject them anyway. But perhaps there is some niche in which mediation might be useful.

Baloo - 27 September 2007 12:14 AM

Back to the topic of the differences between science and politics, etc.  While the scientific method is clearly the best methodology to discover truth, it doesn’t seem to be a useful methodology for deciding what we do with the truth that has been discovered.

I’m going to switch into economist mode here for a minute… the value of the discovery of truth is directly linked to the value of the uses of that same truth.  If we want people to value the discovery of truth, they will need to see the value in the uses of that truth.  The total value of the uses of a truth is directly linked to the sum of the value that all people individually gain from the uses of that truth.  Few uses of truth provide equal value to all persons.  If a person gains little or no value from the uses of newly discovered truth, they will see little or no value in that truth’s discovery.  If a person sees no value in the uses of a truth, they should also be expected not to value its discovery.

If we don’t demonstrate the value of the uses of newly discovered truth, we should not expect people to value the scientific method.  If we want people, or groups of people, to place more value on the scientific method, we will need to demonstrate the value in the discovery of truth.  If we want people, or groups of people, to increase the value they see in the discovery of truth, then we have the obligation to demonstrate the value of the uses of that soon (or not-so-soon) to be discovered truth.  That said, we are going to have a very difficult time demonstrating value to people, and groups of people, if we don’t know what they value.  Serious studies that increase our understanding of what given groups of people value increase the probability that we will be able to demonstrate the value of the discovery of truth, and thus the value of the efficient discovery of truth, i.e. the scientific method.

This is one tactic that Neil Tyson takes with science: explain to the people how useful it is in daily life—how it cures disease, makes life easier and makes us wealthy. I do think that’s a good argument, and certainly as Tyson makes clear, it’s an argument that also appeals to very religious right wingers. Since those are the biggest threat to science funding, it’s important to have a rhetorical strategy that they can take to heart.

 Signature 

Doug

-:- -:—:- -:—:- -:—:- -:—:- -:—:-

El sueño de la razón produce monstruos
