A colleague of mine had some comments on the paper which I think are useful. Apart from the problem of multiple post-hoc comparisons, there does not seem to be any sound, a priori rationale for the particular analyses done. Rather, when the primary outcome showed no effect, many other possible ways of parsing the data were tried rather indiscriminately in order, apparently, to find at least some statistically significant result. Not an unusual problem in scientific papers, unfortunately.
Short-term variations in the Earth’s geomagnetic field are caused by processes in the upper atmosphere – the ionosphere and the magnetosphere. Movements of charged particles in these layers of the atmosphere create large electric currents, and those electric currents necessarily produce magnetic fields that summate with the geomagnetic field, resulting in small changes in both intensity and declination (the difference between magnetic North and true North) of the local magnetic field measured at a point on the Earth’s surface. These short-term variations occur over minutes to hours rather than seconds and are very small – which is why when you look at a compass (away from interfering human artifacts) you do not see the needle moving around.
As the authors state, the overall data in their paper show no evidence of North-South orientation (their ‘surrogate measure’ for detection of a magnetic field) at all.
The authors then go on to do a post-hoc analysis ('data dredging'). They state very clearly in the Methods section that this is what they are doing, but presumably do not realise the statistical problems inherent in doing so. They divide the data up in ways that were not specified a priori, e.g., by time of day, by intensity of magnetic field variation, by rate of magnetic declination variation, etc. (The rate of change of declination should be expressed not in % but in % per unit time.) For each subset they did a statistical test to see whether the confidence interval around the average vector overlapped the N-S orientation. Note that, after each such division, the number of dogs in each category is quite small, mostly in the few tens, so the confidence intervals are correspondingly wide. They do not state how many such tests they did, but it is clear that it was quite a few. Because they did a lot of tests, there is a reasonable chance that at least one average vector would fall close enough to N-S for its confidence interval to overlap that orientation, and so count as statistically significant. Indeed, one of those tests (the one for measurements made when the rate of variation of magnetic declination was 0%) came up statistically significant, i.e., the average vector was more-or-less N-S. They concluded that (1) dogs can detect the Earth's magnetic field but (2) only when the local magnetic field is not showing (tiny) fluctuations in declination.
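The inflation of false positives from many post-hoc tests is easy to demonstrate with a small simulation. The paper does not report how many subgroup tests were run, so the figure of 10 below is an illustrative assumption, as is the conventional 0.05 significance threshold; the point is only that the family-wise false-positive rate grows quickly with the number of tests.

```python
import random

random.seed(1)

ALPHA = 0.05     # per-test significance threshold (conventional choice)
N_TESTS = 10     # assumed number of post-hoc subgroup tests, for illustration
N_SIMS = 20000   # number of simulated "studies"

# Under the null hypothesis a p-value is uniform on [0, 1], so each test
# comes up "significant" with probability ALPHA even when there is no real
# effect. Count how often at least one of N_TESTS null tests is significant.
false_alarms = 0
for _ in range(N_SIMS):
    if any(random.random() < ALPHA for _ in range(N_TESTS)):
        false_alarms += 1

familywise_rate = false_alarms / N_SIMS
expected = 1 - (1 - ALPHA) ** N_TESTS  # analytic value: ~0.40 for 10 tests
print(f"simulated: {familywise_rate:.3f}  expected: {expected:.3f}")
```

So with only ten null subgroup tests, the chance of at least one spuriously 'significant' result is already around 40%, not 5% — which is why a single significant subgroup in an unplanned analysis is weak evidence.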
If one were to take the findings seriously then, strictly, the results of such a post-hoc analysis should be treated as 'hypothesis generation'. Thus, this study has generated the hypothesis that dogs can detect the Earth's magnetic field, but only when it is stable and not showing tiny fluctuations in declination, and somebody can now test that prediction (any takers?). It is not implausible that dogs could detect the Earth's magnetic field, because other mammals, e.g., the subterranean mole rats, certainly can (although in their case it is easy to see why detection of the geomagnetic field would be evolutionarily useful, for animals that spend pretty much their entire lives underground). However, given that the Earth's magnetic declination fluctuates only very slightly, and does so much of the time, it would be very peculiar if dogs had evolved to detect the field only when it is not undergoing those tiny fluctuations! It is also difficult to think of a reason why dogs should use any such ability to orient themselves N-S when defaecating…