Yes! Yes! Ye- Oh, Damn: The 10% Fail

February 6, 2017

About once or twice a month (though sometimes once or twice a week, depending on how much I'm reading at the time), I come across an article or blog that makes some important point that I agree with. Maybe it's about the need for skepticism, or about politics, or anything else. I'm reading along, nodding in approval at paragraph after paragraph (or assertion after assertion), pleased to think of those it might educate.

And, just as my finger is reaching to share or like the post, I wince. The writer or commenter stumbles, making a gaffe or mistake that I can't in good conscience implicitly endorse. It's frustrating because I agree with the overall point, and think the comment or piece merits a wider audience.

It's like some well-intentioned skeptic writing a piece about why the evidence for Bigfoot (or recovered memories, or alien visitation) is poor, and giving two solid, accurate reasons--followed by a third which is flat-out wrong, or an argument whose premise is embarrassingly flawed. This happens regularly enough that I've taken to describing it (to myself, anyway) as The 10% Fail. Ninety percent of it is on target, but the last ten percent undermines the author's credibility in some way. This issue is a common lament among professional skeptics: a well-meaning but inexperienced skeptic goes on television or gives an interview--ostensibly representing organized skepticism--in which he or she misspeaks or mangles some salient fact in the process of debunking some bogus claim, and that error is then seized upon by opponents as proof that skeptics (writ large) don't know what they're talking about.

Whether that bad 10% is enough to contaminate the rest of the person's opinion or blog is of course a subjective question that varies from person to person. In today's world of fake news, "alternative facts," and other misinformation, I take a dim view of it. For me it's very often a deal-killer, because I can probably find and share someone else's viewpoint or post on a similar topic that doesn't contain the error. In this way--ideally at least--diligent journalism and well-considered commentary rise to the top and are shared and rewarded, while poor fact-checking and sloppy thinking remain unseen. In the real world, of course, there are far more salient factors that make a post go viral, including how much a person agrees with the view expressed in it (regardless of facts).

I don't mean to suggest that any news story or point of view which is not completely supported by hard evidence and airtight logic shouldn't be considered or shared; we're all human and everyone makes mistakes. Imprecision in some minor details is often a necessary part of journalism. For example, if a half-dozen tornadoes hit northern Texas but one of them briefly crosses the state line into Oklahoma (doing little or no damage), it's okay for a journalist to generalize, for the sake of clarity and brevity, that the events happened in northern Texas. Though perhaps not technically true, it's close enough (and the part that's not is not significant enough to undermine the larger point of the piece).

People are understandably reluctant to point out errors in their friends' and colleagues' work (and more generally in points they agree with), but a willingness to do so is an important part of independent, critical thinking. Skeptics and scientists reject dogmatism for exactly that reason, and understand that offering constructive criticism is a sign of respect, not personal grievance. The goal is to achieve a better understanding of facts and truth. Pointing out and acknowledging errors (by qualifying or removing them, for example) strengthens arguments.

In future columns I will highlight a series of these "10% Fails," taken from a variety of sources and contexts. To be clear: I generally agree with the larger points being made by the authors in the pieces I quote, and I highlight their errors with the expectation that the mistakes have likely been overlooked. I hope it will encourage readers to think more deeply and critically about the information they see and share--especially when it confirms their point of view. It's much easier and more intuitive to find errors, faulty logic, and unsound assumptions in positions we disagree with than in those we approve of. Confirmation bias is one of the most difficult psychological errors to detect, and the examples in this series may help identify and correct it.


The Mermaids and Myths

I recently came across a Slate blog decrying a fake documentary about finding scientific evidence for mermaids. I'd researched and written about this topic, discussed it on the MonsterTalk podcast, and was even quoted in a CBS News story.

I wrote, "Though the filmmakers acknowledged that the film is science fiction, for many people it was indeed ‘wildly convincing.' The show was an X-Files type fanciful mix of state-of-the-art computer generated animation, historical fact, conspiracy theory, and real and faked footage sprinkled with enough bits of scientific speculation and real science to make it seem plausible. In fact there were even interviews with real NOAA scientists. As with all good science fiction, there's a grain of science and truth to it: the so-called ‘aquatic ape' idea it touted (suggesting our evolutionary ancestors may have lived in marine environments) is a real hypothesis, but has nothing to do with mermaids.

"With a sly wink, Mermaids: The Body Found presented an entirely fictional story in fake-documentary format for added plausibility. There's a reason why so many horror films (especially supernatural-themed ones) claim to be based in fact or ‘on a true story,' when they're not: it adds realism and interest. The program posed scientifically nonsensical questions like, ‘If massive whales haven't been discovered until recently, it answers why we haven't been able to detect mermaids yet?' (Answer: Whales have been studied for many decades and are not a ‘recent discovery;' the fact that genetic testing has revealed new subspecies of whales says nothing about why completely unknown mythical animals like mermaids have never been discovered.)"

The Slate piece on the same topic was titled "The Politics of Fake Documentaries" and had the slugline "Mermaids: The Body Found and its ilk have done long-term damage." It was by someone who'd "chaired a session on the impact of fake documentaries on public understanding of science," and proceeded to (quite rightly) criticize two Animal Planet fake documentary specials, Mermaids: The Body Found and Mermaids: The New Evidence, that aired in 2012 and 2013.

I was totally and completely with the author as he railed against the misleading shows, complained that the disclaimers were virtually invisible, and so on. Then--like the movie trope in which a needle scratch interrupts a soothing melody as an auditory cue that something has suddenly gone amiss--I read this:

"And the bold and outright fabrications of shows like Mermaids erodes the public's trust in government and scientific organizations.... By calling into question the motives and methods of the National Oceanographic and Atmospheric Administration, an organization responsible for studying the effects of climate change on the United States' coasts, Discovery provided validation for this anti-science movement and created an ecosystem ripe for exploitation by the merchants of doubt committed to undermining scientific consensus.... lasting damage to the public's trust in science has already been dealt."

Erm. I was not aware of any anti-science organizations that had cited the Mermaids TV show as a way of questioning NOAA's "motives and methods." It's certainly possible, but I knew of no evidence for it. Was it really true that these shows caused "lasting damage to the public's trust in science"? Even for a skeptic, this seemed to greatly overstate the case. Was this pure speculation, or based on some evidence?

Given that the Slate piece was written just two years after the pseudodocumentary aired, how could the author--or anyone else--determine that "lasting damage" had indeed been done to the public trust in science? Perhaps that could be known ten or twenty years later, but is a few years really enough time to tell? Could we compare polls of public trust in science before and after the shows aired and see a significant drop plausibly attributable to those programs specifically?
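To make that concrete, here is a minimal sketch of the kind of calculation such a before-and-after comparison would involve--a simple two-proportion z-test on the share of respondents expressing trust in science in each poll. The figures below are entirely hypothetical placeholders invented for illustration; no real survey data is implied.

```python
# A minimal sketch of what evidence for a "lasting damage" claim could look
# like: a two-proportion z-test comparing trust-in-science poll results
# before vs. after the broadcasts. All numbers are hypothetical.
from math import sqrt

def two_proportion_z(trust_before, n_before, trust_after, n_after):
    """Return the z statistic for the difference between two proportions."""
    p1 = trust_before / n_before
    p2 = trust_after / n_after
    pooled = (trust_before + trust_after) / (n_before + n_after)
    se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    return (p2 - p1) / se

# Hypothetical polls: 840 of 1,000 respondents trusted science before the
# shows aired, 815 of 1,000 after. Here |z| is about 1.48, below the 1.96
# threshold, so even this apparent drop would not be statistically
# significant at the 5% level--let alone attributable to the programs
# specifically rather than to any other event in the intervening years.
z = two_proportion_z(840, 1000, 815, 1000)
print(f"z = {z:.2f}")
```

Even a significant drop in such a comparison would only be a starting point, of course; attributing it to two cable specials rather than anything else would require far more.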

In short, where's the evidence?

I tried to contact the author, Andrew David Thaler of the Southern Fried Science website, to ask him what information he had about the lasting damage, so that I could cite and credit him the next time I wrote about the topic. That's when I discovered that Thaler had blocked me on Twitter. I then realized why Thaler's name sounded vaguely familiar: On March 3, 2016, I had seen a tweet he sent in which he seemed to defend a widely criticized study, asking why people "were freaking out" over it when there are "millions of publications with equally tenuous introductions & discussions." I replied in earnest that his statement "Sounds like a ‘tu quoque' logical fallacy to me: Low standards by others doesn't justify low standards in a given case." It was not meant as a personal attack (I'd never met or interacted with him before, though I later realized I had quoted him as an expert in a news story about the Loch Ness monster a year or two earlier), but instead just a note that he'd offered what seemed to be a faulty argument. I expected an explanation of why he had not committed a fallacy, or perhaps some point I overlooked, but I did not expect his response: "Well, you seem fun, but you've been a complete monster to my friends for years, so I'ma gonna go ahead and block you."

I had to read it several times to be sure I hadn't misread his message or intent, but Thaler had in fact completely ignored my (polite) suggestion that he'd made a logical error, and instead blocked me--because he said I'd been mean to his (unnamed) friends. To this day I have no idea what he meant, or why his friends might have been offended by something I wrote or did, but I realized that he would probably not be interested in responding to any other criticism I might have of his articles or blogs, however well founded.

In any event I hope that Thaler (and anyone else) continues the fight against scientific misinformation masquerading as entertainment, as it's an important topic--and one that deserves well-grounded, fact-based arguments and rebuttals.


Comments:

#1 Karen (Guest) on Monday February 06, 2017 at 12:51pm

“The show was an X-Files type fanciful mix of state-of-the-art computer generated animation, historical fact, conspiracy theory, and real and faked footage sprinkled with enough bits of scientific speculation and real science to make it seem plausible.”

I take issue with the “state of the art” CGI; I, for one, thought it was terrible and obviously fake footage.

But I do admit that the first time I ever watched the Mermaids mockumentary I thought it would make a fantastic X-files episode. X-D

#2 ChrisJ on Friday February 10, 2017 at 7:21am

The point of this essay is of such importance that I think it deserves to be published more widely and prominently.  Maybe consider running a form of it in Skeptical Inquirer as well.  It really is a critical point that one’s entire slate of arguments must be carefully checked—three solid points are worth way more than a few good ones and a couple of low-quality ones.  Quantity does not beat quality.  The shotgun approach should be left to woo peddlers.  We all suffer credibility erosion from a handful of people who can’t help padding the length of an argument with a weak point or two, and this point is well made in this essay.  This is just too big of a thing to not get it in front of as many eyes as possible.

#3 Benjamin Radford on Friday February 10, 2017 at 8:29am

Thanks, ChrisJ—I’ll see about adapting it for Skeptical Inquirer!
