On the (potential) Morality of Crowds
May 11, 2015
I reposted a video on my Facebook page the other day that shows a busy intersection without traffic signals in which the traffic somehow manages to flow, albeit in a way that to most of us seems haphazard and frightening. There are many such examples, mostly from what we think of as "developing world" countries, where the infrastructure isn't as "advanced" as we prefer: mainly in Asia and Latin America. Many people see such scenes, think "what chaos," and take comfort in their own enlightened and civilized infrastructure, where everything is ordered and signals and signs tell us what to do. When I see these videos, I think how marvelous it is that, for the most part, traffic flows without more collisions. I am also intrigued by the moral implications. There is some interesting science behind traffic as well, since it is so important in our modern world, and the studies that intrigue me most tend to show that there is actual danger in providing too many signs and signals, and that accidents (especially fatal accidents) may be reduced by more "passive" methods of traffic control such as roundabouts and other design decisions, as is increasingly the practice in Europe (skeptics should check out each of the links I've embedded before voicing their doubts).
There are a number of theories behind the move from active to passive traffic management. One is that too many signs distract drivers. Another is that the feeling of safety afforded by traffic control devices reduces drivers' attention and situational awareness. I want to make a moral claim relating to this -- namely, that in the absence of "top-down" traffic control, the responsibility for managing a potential conflict rests in the hands of each driver. There is certainly a practical reason to want to reduce accidents, and especially fatal accidents, but there also seems to be a moral reason to put responsibility in the hands of drivers. That reason has two aspects: one concerns what we call "active" responsibility, the other "passive" responsibility. Active responsibility is the responsibility a person has in the moment for making choices, and we often think of moral responsibility as arising from one's freedom to make choices. We should wish to enhance active responsibility: the more active responsibility a driver feels, the more carefully he or she should consider each choice (and thus, hopefully, make choices that avoid harm). After an accident, the responsibility one bears for having caused it is often called passive responsibility, and we typically evaluate it from the standpoint of the driver's knowledge and choices at the time of the accident. We hold those who had knowledge yet made bad choices more responsible than those who lacked knowledge through no fault of their own. By reducing the number of active, top-down controls, we force drivers to take more active responsibility, and we reduce the excuses for those who, having caused an accident, might otherwise be apt to blame the signals and signs rather than their own bad judgment or negligent lack of knowledge.
All of this is, of course, a metaphor. I'm interested in traffic, but I'm not consumed by it alone. Rather, I see it as a sort of model of society at large, and I hark back to the excellent and intriguing book The Evolution of Cooperation by Robert Axelrod. Axelrod was interested in how cooperation might be a successful long-term strategy in general, and studied it using the Prisoner's Dilemma game in an "iterated" form, in which players play it over and over and their strategies become known to others (click through the link to learn more about it, as I won't use up space here to describe it). The important conclusion, in a nutshell, though it is still the focus of debate today, is that cooperators are more successful than defectors in the long term. I am interested in whether this is the phenomenon behind signal-less traffic's ability to flow, and ultimately whether society in general might function similarly even absent top-down controls (laws). Yes, there are defectors who game and cheat the system, and they sometimes cause accidents, but in general a subtle form of cooperation is involved in every forward movement of traffic (society), in which micro-negotiations that happen to work to each individual's advantage allow the system to work even absent top-down controls. Without law, order may still emerge: cooperative behaviors evolve because they work to societal and individual advantage, and moral responsibility is taken by each player for the smooth functioning of the system. What seems like a recipe for chaos could well be the foundation for a beautiful form of freedom and order, where responsibility and cooperation are nurtured through necessity rather than dictate from above.
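Axelrod's result is easy to reproduce in miniature. Below is a minimal sketch of an iterated Prisoner's Dilemma tournament -- the payoff values are the standard ones for the game, but the particular strategies, round count, and population mix are my own illustrative assumptions, not a reconstruction of Axelrod's actual tournament. The point it illustrates is the one above: when enough reciprocating cooperators interact repeatedly, they outscore the unconditional defector.

```python
# Minimal iterated Prisoner's Dilemma sketch (illustrative assumptions:
# strategies, 200 rounds, and the 5-player population mix are mine).

ROUNDS = 200
# Standard PD payoffs: (my move, their move) -> my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def play_match(a, b, rounds=ROUNDS):
    """Play two strategies against each other; return their total scores."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# A small population: mostly reciprocators, one defector, one pushover.
players = [('tft1', tit_for_tat), ('tft2', tit_for_tat),
           ('tft3', tit_for_tat), ('defector', always_defect),
           ('cooperator', always_cooperate)]

# Round-robin: every player meets every other player once.
totals = {name: 0 for name, _ in players}
for i in range(len(players)):
    for j in range(i + 1, len(players)):
        (name_a, strat_a), (name_b, strat_b) = players[i], players[j]
        score_a, score_b = play_match(strat_a, strat_b)
        totals[name_a] += score_a
        totals[name_b] += score_b

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
# The tit-for-tat players finish first (1999 each), ahead of the
# unconditional cooperator (1800) and the lone defector (1612).
```

Note that the outcome depends on the mix: a defector surrounded only by unconditional cooperators does very well. It is the presence of enough players who reciprocate -- cooperating by default but punishing defection -- that makes cooperation the winning long-term strategy, which is the part of Axelrod's conclusion that remains debated.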
#1 Philip Rand (Guest) on Wednesday May 13, 2015 at 1:19am
You have a big problem with your idea because “individual” advantage and “societal” advantage are at odds with each other.
You will have to flesh out this “advantage” dichotomy, because you cannot say that they are both one and the same because clearly they are not.
#2 David Koepsell (Guest) on Thursday May 14, 2015 at 6:59am
Axelrod’s insight is that in an iterated version of the game (where people get to know you and your strategies), to be successful one has to be seen to be a cooperator, and thus make your interests more closely coincide with those of “society” ... it’s not a one-off game, this thing called living with others. Defecting might work to the individual’s short-term advantage, but it backfires in the long run.
#3 Randy on Friday May 15, 2015 at 10:43pm
“accidents (especially fatal accidents) may be reduced by use of more ‘passive’ methods of traffic control such as roundabouts”
No, it’s not about one versus the other. It’s about using the right design for the right situation.
Some fool decided that an intersection of two highways (one of which is expected to become very busy once the other end of it is connected to another system) near where I live should have a roundabout. It terrifies me to drive there, so I avoid it (perhaps avoidance is one reason these things “reduce” accidents). It’s been in the news already for having semi trucks fall over because of the sharp radius and the lack of signage. And nobody knows what you’re supposed to do if someone on the inner lane wants to exit the circle. It’s a lane without a purpose other than to cause chaos and confusion.
To be sure, I love roundabouts, when they are in reasonable locations, like residential areas. They don’t belong on heavily-used highways.
#4 david koepsell (Guest) on Saturday May 16, 2015 at 5:47am
You’ll note the use of the word “may” in the statement, which admits of other possibilities in other circumstances.
#5 Philip Rand (Guest) on Sunday May 17, 2015 at 2:13am
I entirely agree with the Axelrod model.
However, “moral responsibility” in your context means that every individual in society willingly gives credit to other individuals who are superior, i.e. contribute more to society.
Now, this sounds to me like, for your model to work, all citizens must be trained to be “virtuous”.
#6 Philip Rand (Guest) on Sunday May 17, 2015 at 2:49am
That being said…given enough time (iterations) then it is likely that such an eventual outcome as you posit would occur.
#7 David Koepsell (Guest) on Sunday May 17, 2015 at 4:17am
Regarding “virtue”, I don’t see how that is necessary. What is necessary is that, in the absence of external directions, each “player” makes a choice to use a cooperative strategy (exercising active responsibility) because it is to his or her advantage in the long run. Nothing about this requires “virtue”.
#8 Philip Rand (Guest) on Sunday May 17, 2015 at 8:29am
Concerning virtue perhaps…
However, the important consequence (philosophically speaking) if one follows through with this Axelrod type of model that you are suggesting is that grammar is arbitrary…it has to be because the Axelrod model ONLY works if one considers the origin of human co-operation and applies it contingently, i.e. from Australopithecus and onwards.
Which means, we are applying an outside hypothesis, i.e. the Axelrod model to (in your case) “philosophical logic”...and in logic there are no accidents…but outside logic everything is accidental…which means that human co-operation would appear to hold of an indefinitely extended domain (here is one pitfall, i.e. not a limited environment)...suggesting that these laws of co-operation are not true propositions, but rather rules for the construction of co-operation. This is why ethical facts are imponderable.
Which means, that the Axelrod model is about the “net” we use to describe co-operative reality and not about what the “net” describes.
It is quite a complex philosophical issue…but extremely interesting…
I doubt you will be able to make head nor tails of what I have just written…however, if you look in detail the implications of the Axelrod model for philosophy you will be quite surprised.
It would be a very good avenue of research.