The Recent Talks on Physics
Posted: 06 August 2008 11:19 AM
Jr. Member
Total Posts:  67
Joined  2008-07-28

Necessity plus Chance

The presentation by Professor Taner Edis had a most promising title and theme, “Necessity plus Chance”, but it did not deliver as much insight as that title promised.  It offered a list of topics taught in the academic course, with as much strategic exposition as could be fit into the time.  One need not criticize the quality of the summary itself, but nothing in it departed much from convention, and no new philosophy emerged from the leads left unfollowed.

What is it that necessitates, and how is chance different from necessity?  There is certainly not a God, the Designer, to decide on necessity, since there is no physical mechanism for this sort of design work.  Contrary to Derrida and Lakoff, humans do not have the computational power to mandate their own designs as necessary and thereby project them onto the universe.  But mathematics, not as a unified authority but as its sets of contingent possibilities, does have all of the power needed to make for necessity.  Chance then becomes the lesser and rarer sort of mathematical axiom.

Range of Expression

Then, what makes for the greater and more ubiquitous sorts of mathematical principles that end up as important physics?  Professor Karen Gipson put forward a very key phrase, “range of expression”.  But I think she meant this to be understood empirically rather than intrinsically, which is a misfortune, since intrinsic properties are more valuable for theoretical understanding.

Consider a comparison of mathematical systems, which are more or less expressive, to stage actors who have mastered more or less of their lines.  It is practically a truism that actors with more roles mastered are heard more often, so mathematical systems that allow for more possible expression are to be preferred for theoretical physics.  Actors who require particular co-actors, who implement only part of their roles or exceed them, or who share an identity with other actors are heard less often for those reasons.  And so it goes with the comparable mathematical systems; they are found less often in good physical theory.

Clear and Distinct Ideas versus Anti-intellectualism

But Professor Gipson upheld the utility of models that are false, and of models that do not contribute to understanding.  This makes it possible to defend conventional models that have no theoretical merit and that require an exercise of compensating errors in order to fit experiment.  You might think that this sort of cultural stagnation has been overcome, since epicycles are now an object of scorn.  But no: consider the mathematically defective models used in current physics, such as vectors and magnetic monopoles.

And again, Professor Gipson did not uphold the utility of understanding the ideas of academic physics.  Spinoza, by contrast, upheld the central importance of clear and distinct ideas, to the point of choosing them over others for further study.  Spinoza asserted that obscurity, unfounded multiplicity, and arbitrariness were among the illusions that arise from an inadequate understanding of ideas.  So he was the great opponent of the rampant anti-intellectualism of his time.

Liberal psychoanalysts identify early modern times, the 16th century, as psychotic in the degree of its cultural neurosis.  The Enlightenment was then an attempt at recovery from this neurosis.  But the culture of the 1970s was still judged severely neurotic by Dr. Theodore Rubin.

So it is well advised and very productive to keep watch for and oppose outcroppings of anti-intellectualism wherever they occur, even in the departments of mathematics and physics.


Michael J. Burns

Posted: 07 August 2008 10:39 AM   [ # 1 ]
Jr. Member
Total Posts:  2
Joined  2007-10-22

A model cannot be “true” or “false” in science.  It’s just a matter of whether or not it explains the data.  Some models work better than others, but that doesn’t mean they are more “true” than models that don’t work as well.  Case in point: Niels Bohr developed a model of the atom that had electrons orbiting a nucleus, much the way planets orbit a star.  It wasn’t a good model, and he knew it.  But it fit the available data better than the model that scientists had been working with before.  That didn’t make it more “true”, just more useful.  You seem to be arguing that having no model at all is preferable to having an imperfect, incomplete, but still useful model.

 Signature 

It has been said that man is a rational animal. All my life I have been searching for evidence which could support this.  -Bertrand Russell

Posted: 09 August 2008 08:50 AM   [ # 2 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

Your comment is substantial and interesting.

To say that “true” is not out there is to concede to postmodernism and to take refuge in unreliable convention as the only means of understanding.  To say that a “true” out there cannot be translated into a truth in human language is actually the same as putting forward a very particular theory of the universe in which such translation is not allowed.  I have found that explicitly considering these possibilities is extremely productive for understanding the theory of physics.


Michael J. Burns

Posted: 09 August 2008 09:22 AM   [ # 3 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

Let me comment on yet more substance in your reply.

Thinking subjectively, I wish not to be bothered by models that are applied hypocritically, or that are defined in a compound or hybrid style.  These do not genuinely contribute to understanding or to reliable prediction in physics.

Better models do indeed exist.  Several hours of tutoring would suffice for a good orientation.  Mathematical systems do have their more or less restricted range of expression, but this range is not to be taken as subjective or empirical.  In other words, a model that is restricted by its empirical range is not really predicting anything, but a mathematical system limited only by its intrinsic range of expression does allow for useful understanding.


Michael J. Burns

[ Edited: 09 August 2008 09:39 AM by mburns ]
Posted: 09 August 2008 11:54 AM   [ # 4 ]
Jr. Member
Total Posts:  84
Joined  2008-06-30

mburns,

There are two sides to the debate.  If you think that models are not true or false, but simply useful, then you make an ontological commitment to a “random walk” through all possible theories.

OTOH, many times our mathematical intuition has been overturned by scientific discoveries.  While mathematical principles (e.g., symmetry) are powerfully predictive, sometimes they just stop working.

I would say that our intuition lies in this gap ...

Posted: 12 August 2008 11:47 AM   [ # 5 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

Sometimes mathematical systems just stop working.  The reply is: yes, but...!

Sometimes the premises of even the best theories can fail, as when measurements of length and time fail at small scales, so that the theory of spacetime fails there.  And sometimes the premises simply yield no answer at all; the theory of spacetime has nothing to say about solid surfaces and colored light, nor does it say anything about temperature.

These failures are not arbitrary, though.  They occur in an understandable manner for the best of theories.  Only badly posed systems, such as the usual usage of vector notation, have failures that seem arbitrary.  The correctness of systems that are badly posed in the manner I mentioned above depends on coincidence.  But the theory of spacetime has a simple and unified premise, and is therefore not as subject to chance.

There could not reasonably be a cosmic censor to arbitrarily interfere with the best of theories, as Spinoza realized. There is just the complexity of possible initial conditions.  This is all the more incentive to concentrate on the properly posed theories with the capacity to survive chance conditions.


Michael J. Burns

Posted: 12 August 2008 05:08 PM   [ # 6 ]
Jr. Member
Total Posts:  84
Joined  2008-06-30

I cannot say I understand your post:

* How is vector notation badly posed, or more prone to give incomprehensible errors?

* Could you describe more precisely how a theory can be more “well-posed,” and what is the “chance” that they are subject to?

Of course one can always be more general in one’s notational premises to avoid errors of circumstance, at the expense of convenience.

Posted: 13 August 2008 02:03 PM   [ # 7 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

“Chance”, I could say, is the competition that a mathematical system faces, in being experienced by an observer, from other mathematical systems that are better posed and that have more of an intrinsic range of expression.  Most of this criterion originated with Spinoza.

Most applications of vector notation are not well posed; the exceptions are spacetime intervals, and momentum in the case of mechanics.  The usual definition of a vector is compound in that it conflates two types of objects, genuine vectors and 1-forms.  These have, respectively, a dimensionality of L and 1/L, and a tensor rank of 1 contravariant and 1 covariant.  (Tensor ranks are not relative, convention to the contrary.)
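
To state the distinction in standard notation (this is only the textbook transformation law, written out for concreteness, with the dimensional assignments I gave above): under a change of coordinates x -> x',

    v'^\mu = (\partial x'^\mu / \partial x^\nu) \, v^\nu            (genuine vector: contravariant, units of L)
    \omega'_\mu = (\partial x^\nu / \partial x'^\mu) \, \omega_\nu   (1-form: covariant, units of 1/L)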

There are yet other applications of vector notation that are hypocritical because they fail to match even the compound definition of a vector.  The result of a cross product is a bivector, with a dimensionality of L^2 and a tensor rank of 2 contravariant antisymmetric.  And nothing in electromagnetism is a vector other than the quantities involved in the motion of a test charge.
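
To be concrete about the antisymmetric rank-2 object I mean, it is the exterior (wedge) product of the two factors; this much is standard:

    (a \wedge b)^{\mu\nu} = a^\mu b^\nu - a^\nu b^\mu ,   so that   (a \wedge b)^{\nu\mu} = -(a \wedge b)^{\mu\nu} ,

and if a and b each carry units of L, the components of a \wedge b carry units of L^2.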

When a physicist tries to avoid the damage resulting from the misuse of vectors, she must impose additional conditions to hide the damage.  The misuse, combined with the additional conditions, becomes a hybrid system of mathematics.  This hybrid system is then handicapped severely by its need for lucky coincidence in the race for expression.

I have a manuscript that expounds on a 25 point argument along these lines.


Michael J. Burns

Posted: 13 August 2008 06:56 PM   [ # 8 ]
Jr. Member
Total Posts:  84
Joined  2008-06-30

It’s perfectly fine to use vectors when the metric is flat, where they are equivalent to 1-forms.  And you only need tensors if you have multilinear maps of rank greater than 2; otherwise, just use matrices.  So this works well in simple classical mechanics and quantum mechanics.

When you have multilinear maps of rank greater than 2 and/or the metric stops being flat, it pays to think in terms of tensors.  And when you lose commutativity (as in non-abelian gauge bundles) or are doing volume integrals in curved space, forms are the order of the day.

Also, I disagree with your math: 1-forms do have covariant rank 1, but not dimensionality 1/L; that is a bookkeeping trick which causes problems elsewhere.  A cross product, the way you are using it, only exists in flat 3 dimensions; in tensor language the proper operation is the exterior product, which is often elegantly expressed using Hodge duals.
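
To spell out the flat 3-dimensional case (standard identities, assuming a Euclidean metric and orthonormal coordinates):

    (a \wedge b)^{jk} = a^j b^k - a^k b^j ,    (a \times b)_i = \tfrac{1}{2} \epsilon_{ijk} \, (a \wedge b)^{jk} ,

i.e., the familiar cross product is the Hodge dual of the exterior product, a \times b = \star (a \wedge b), and that identity holds only in three dimensions.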

Posted: 13 August 2008 07:16 PM   [ # 9 ]
Jr. Member
Total Posts:  84
Joined  2008-06-30

In short, there’s no problem as long as you understand your assumptions and are willing to question them.

Posted: 16 August 2008 10:47 PM   [ # 10 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

Vectors are not equivalent to 1-forms even in a flat metric.  They are diagrammed differently as befits their different units of measure.  A corollary of the principle of general covariance is that such diagrams can be relied on to represent the physics when the correct tensor rank is used, but not when the ranks are confused.

Planck’s principle is correct in allowing the reduction of other units of measure to units of length; this is not a bookkeeping trick.  Trouble is caused by using the wrong tensor rank, or by just plain bad theory, not by referring to the implied power of length.

For instance, when the scale of length changes, the numerical measures of vectors and 1-forms change in opposite manner.  This behavior is what the names contravariant and covariant denote, and it is caused by the units of measure, L and 1/L.
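
As a check of this claim, consider a pure rescaling of the coordinates, x'^\mu = \lambda x^\mu (nothing here beyond the standard transformation laws):

    v'^\mu = \lambda \, v^\mu ,        \omega'_\mu = \lambda^{-1} \, \omega_\mu ,

so the two kinds of components do scale in opposite manner, exactly as quantities measured in L and in 1/L must.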

The conventional cross product, when incorrectly represented as a vector, is only defensible in 3-space.  But the bivectors I refer to are perfectly general.

Taking a dual is an operation which makes a different object, not to be conflated with the original.

Complacency is not called for.  Unremarked convention is the least reliable source of truth, human perversity being what it is.


Michael J. Burns

Posted: 17 August 2008 03:36 AM   [ # 11 ]
Jr. Member
Total Posts:  84
Joined  2008-06-30

mburns,

As for convention, I agree that if “unremarked” it is indeed complacent.  But this doesn’t happen; you’re harping on a problem that doesn’t exist.  Maybe you’re thinking of the use of vectors in Newtonian mechanics and Maxwell theory.  Well, vectors themselves were hardly developed by the time of Maxwell; it was general relativity that helped to inspire differential geometry.

And I still don’t understand your math; I work with these objects every day, and I don’t think you know what you’re talking about.  And I’ve never heard of “Planck’s principle.” Perhaps I am simply not getting your semantics—it might be useful for you to work out of a standard text (for example) so we have our terms straight.

Posted: 22 August 2008 08:23 AM   [ # 12 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

It is not seemly to say that a problem does not exist.  It was a cultural calamity when Richard Feynman misunderstood electromagnetism, as related in his “Lectures”, because he was using vector notation to attempt intuitive insight.  A vector representation of the E and B fields does not transform properly to observers in different motion.  So he concluded falsely that electromagnetism was not very understandable, and further that physics was not understandable in the main.

By contrast, the Faraday tensor representation of electromagnetism does transform properly when allowed its proper rank and units.  Its diagram is rock steady under transform between observers in different motion.  And viewing the appropriate diagram of the Faraday tensor contributes to the understanding of electromagnetism in ways that are indispensable.
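
For reference, in one common convention (signature -+++, units with c = 1) the covariant components of the Faraday tensor collect E and B into a single object,

    F_{\mu\nu} =
      [  0    -E_x  -E_y  -E_z ]
      [  E_x   0     B_z  -B_y ]
      [  E_y  -B_z   0     B_x ]
      [  E_z   B_y  -B_x   0   ]

and it transforms as one piece, F'_{\mu\nu} = (\partial x^\alpha / \partial x'^\mu)(\partial x^\beta / \partial x'^\nu) \, F_{\alpha\beta}, with no supplementary rules required.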

The use of vectors strongly implies behavior that does not exist in electromagnetism.  A 2-form representation with geometric units of L^-2 does not.  Informed students might know about the misbehavior of vectors, but it is only obscurantism when even the most brilliant are thereby misled about the nature of physics.


Michael J. Burns

Posted: 22 August 2008 09:09 AM   [ # 13 ]
Jr. Member
Total Posts:  84
Joined  2008-06-30

“A vector representation of the E and B fields does not transform properly to observers in different motion.”

This is wrong; anyone doing a Lorentz transformation would also include the potential fields (i.e., the time components), and you get the right answer.  See any textbook for a second course in electromagnetism: not a tensor in sight, and you still get the correct results.
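
To be concrete, the standard boost formulas (units with c = 1, boost velocity v) are

    E'_\parallel = E_\parallel ,   B'_\parallel = B_\parallel ,
    E'_\perp = \gamma \, (E + v \times B)_\perp ,   B'_\perp = \gamma \, (B - v \times E)_\perp ,

and these are exactly what you get by transforming the four-potential and recomputing the fields.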

I read Feynman’s Lectures and I don’t recall any mistake he made along these lines.  Which lecture?  I’ll have to look it up when I get my text out of storage.

Posted: 29 August 2008 11:17 AM   [ # 14 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

I am sorry for the delay.  I was apprehensive over continuing.

The standard texts, I can only put it bluntly, resort to compensating errors to deal with the problems raised by vectors.  The standard advanced textbook on electromagnetism is the most egregious.  (I hear that proper geometric units are disregarded to the point of disaster in its exposition of the transverse redshift.)  One ought to be reminded of the old orthodoxy of epicycles, a hybrid system of mathematics that was hypocritically applied and irrationally enforced.  But “GRAVITATION”, if only in Chapter 4, shows the way: geometric objects.

To start picking up old threads: the existence of a particular symmetry somewhere in metaphysics is guaranteed by Spinoza’s principle of non-censorship.  But its persistence requires a theorem of conservation or causality.  And if the consequences of a symmetry are not global, then the premises for the necessary supporting theorem can change in some circumstance without further logical difficulty.

Max Planck was indeed responsible for the idea of using geometric units, measuring things in terms of a power of length.  I call this a principle because it is a key idea that rates as almost a truism.
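
As an illustration (this is just the geometrized convention followed in “GRAVITATION”, setting G = c = 1):

    t  ->  c t            (a time, re-expressed as a length)
    m  ->  G m / c^2      (a mass, re-expressed as a length; for the Sun this comes to about 1.5 km)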

Section 2.2 of “GRAVITATION” discusses the principle of general covariance, but the treatment is very inadequate.  In fact “GRAVITATION” makes a fundamental blunder by deprecating, and failing to follow, all of the corollary forms of general covariance.  And academic departments are in high rebellion against the principle; an outbreak of anti-intellectualism is at the root of this.  But no good comes from rejecting an idea that is practically a truism: that the best practices and results of geometry should apply wherever the notion of geometric objects applies.  And recall that there is no cosmic censor to stop geometric objects from existing where they can.  All of the necessary theorems of conservation and causality follow from the simple premise of the existence of a metric.

Richard Feynman expounded incorrectly on electromagnetism in his Volume II, on pages 1-10 and 20-10 in the second paragraphs, denying the possibility of a consistent geometric model.  He drew improper conclusions from this on page 2-1, saying that physical understanding is difficult and cannot be mathematical.  This disruption in understanding is caused by his use of vectors beyond their range of expression.  In the distant future, when this is all resolved, students will be admonished for using the wrong tensor ranks.


Michael J. Burns

[ Edited: 02 September 2008 10:01 AM by mburns ]
Posted: 02 September 2008 09:47 AM   [ # 15 ]
Jr. Member
Total Posts:  67
Joined  2008-07-28

You might ask what it matters if a tensor rank is changed by academic convention.  But the underlying geometric object is changed.  The change ruins the scale diagrams that could otherwise be made to understand and verify calculations.  Different units of measurement are implied.  Scaling transforms and relativistic transforms become arbitrary and not understandable.

Bianchi identities determine the behavior of the original geometric object.  “The boundary of a boundary is zero” is the fundamental theorem of this kind of physics.  But when the tensor rank is changed, the original Bianchi identities lose their force of logic on the new object.  And then the idea seems arbitrary, and not important enough to be taught explicitly in the common course.
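
In the notation I am advocating, the point is compact (these are standard exterior-calculus identities):

    d d = 0                                   (the analytic twin of “the boundary of a boundary is zero”)
    F = dA   =>   dF = 0 ,  i.e.  \partial_{[\lambda} F_{\mu\nu]} = 0    (the homogeneous Maxwell equations)
    \nabla^\mu G_{\mu\nu} = 0                 (the contracted Bianchi identity, which enforces conservation)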

How does it happen that academic physicists seem to need several hours of tutoring to understand these issues?


Michael J. Burns
