I certainly encourage everyone to evaluate the evidence for themselves. But there is a problem in that most people aren’t trained to do so (not saying you, since I have no idea). If a study is published and the abstract says “Acupuncture worked,” most people will stop there and accept the conclusion. At best, they may find both positive and negative studies and say “There’s a controversy and we don’t know yet if it works.” However, this would be the result of a cursory search of the literature for every therapy out there, from homeopathy and prayer at one end of the quackery scale to perfectly mainstream therapies at the other. Critical appraisal of the literature according to the principles and techniques of evidence-based medicine requires training. Heck, I’m spending 3 years completing a master’s degree in epidemiology primarily so I can understand the strengths and weaknesses of research studies in terms of their specific design, execution, and statistical analysis.
So when you characterize Dr. Novella’s analysis as “proficient at pointing out biases that may be in play in the interpretation by others of research they have conducted,” you are essentially correct, but you seem to be interpreting that as a form of bias or blind negativity rather than as a specific skill set and knowledge base necessary for interpreting the scientific literature. It’s not that Dr. Novella doesn’t have biases, of course. We all do. However, the purpose of controlled research is to compensate for very specific biases (e.g. selection bias, recall bias, regression to the mean, the Hawthorne effect, etc. ad nauseam) with specific methodological tools (e.g. randomization, prospective design, control groups, etc). Bias is always present; it’s simply a question of evaluating how effectively a specific study, or a body of literature, controls for it. The general public can rarely appreciate the complexities of this, and in my experience neither can most clinicians, or even all researchers.
So we’re stuck either having to develop extensive expertise in every field we wish to evaluate for ourselves, or to decide whose expertise to rely on. This can, of course, lead to the argument from authority fallacy, but it isn’t identical. If we accept a claim solely on the basis of the claimant’s authority as an expert, we are committing the fallacy. If we judge some sources of information as more credible than others on the basis of what we know about the expertise of the sources, then we are making sensible use of experts in lieu of trying to be experts in everything ourselves. The reality is that Google University is a two-edged sword, and in my own limited areas of expertise I find it frequently misleads the public and my colleagues. So I do think a case can be made for trying to filter the information by identifying the relative reliability of various sources. This does, of course, introduce another source of bias, but it seems better than the bias inherent in assuming we are all capable of independently evaluating the evidence in every possible field where there is a dispute.
As far as acupuncture goes, it is a particularly complex area because of the volume of studies, the intensity of the placebo effects it can generate, the difficulty in properly blinding clinical trials, and the tremendous heterogeneity in the quality of the literature. I have spent several years compiling and shaping my own understanding of the literature, and FWIW my conclusions are pretty close to Dr. Novella’s:
1. The traditional theories behind acupuncture (Ch’i, Yin/Yang, meridians, etc) are mystical nonsense.
2. The “scientific” theories behind acupuncture (trigger points, endorphins, etc) are plausible but inconsistently and unconvincingly demonstrated.
3. Sticking needles in people has measurable physiological effects.
4. Sticking needles in people has a significant placebo effect (defined as a positive impact on the person’s perception of their symptoms WITHOUT any detectable improvement in objective measures of their health status that are not affected by belief and expectancy).
5. The same physiological and placebo effects can be generated by pretending to stick needles into people, poking them with toothpicks, banging their toe with a hammer, and lots of other mildly noxious stimuli.
6. There is no consistent, high-quality evidence that acupuncture has any meaningful impact on objective measures of health.
7. There is some evidence that it has small effects on subjective symptoms such as pain and nausea, but this effect is of questionable clinical relevance and only occasionally appears to be any greater in “real” acupuncture compared to “fake” acupuncture.
8. The risks are low but minor side effects are common and serious side effects, including death, are occasionally seen.
So overall, an enormously complex literature developed over many decades appears unable to demonstrate any consistent, meaningful benefits other than a strong placebo effect. Is this worth employing? Well, there are ethical questions about telling people something is effective just to get them to feel better through a placebo effect, but there might be situations in which it would be appropriate and worthwhile (not, however, in my field, where the placebo effect works on the owners to a much greater extent than on the patients). Is this small benefit worth the small risk? How much of our finite healthcare resources does a therapy like this, and further research on it, deserve? I don’t think the problem is one of inadequate research. There is great variation in the quality of studies, but there is a correlation between quality and outcome such that higher-quality, better-controlled studies are less likely to show a positive effect. That strongly suggests there is no effect beyond placebo.
Anyway, I don’t discourage you from having your own opinions or disagreeing with authority figures. That’s a huge part of what CFI is about. I just thought my perspective might be useful given the time and energy I’ve sunk into this area.