
The Case for Blind Analysis: In Research, What You Know Can Hurt You

October 15, 2015
by Glen Martin

Determining reality can be a confounding business. It’s hard to separate subjective sensory impressions, cultural imperatives, religious epiphanies, social mores, and gut feelings from what objectively is. No surprise, then, that many of us rely on scientists to tell us what’s what. And scientists, in turn, rely on the vetted and published results of significant research to both aid them in their own inquiries and derive an accurate sense of the cosmos and everything in it and beyond it.

But peer-reviewed research is by no means an infallible standard. It’s not meant to be: The laws of nature may be immutable, but our understanding of them is limited, and always subject to revision as research progresses. And that’s not the only issue. Science, being a human endeavor, can be compromised by human failings. Studies based on cooked data have (rarely) found their way into some of our most respected scientific journals. Corporate money increasingly is replacing public research funding. As noted recently in an op-ed in The New York Times, this massive influx of private money may be—gasp—influencing outcomes.

Perhaps an even bigger obstacle to separating the wheat of reality from the chaff of delusion and misapprehension is nothing so nefarious as fraud or Mammon. In a nutshell, people—even truth-seeking scientists—tend to give disproportionate weight to empirical data that support favored hypotheses. It’s not that they’re trying to pull a fast one, says UC Berkeley physicist and Nobel laureate Saul Perlmutter. Rather, they tend to examine their research for errors only when the results seem far off the expected mark.

Ultimately, this “confirmation bias” results in conclusions that are impossible to replicate. And in science, if it can’t be replicated, it just ain’t real.

“Confirmation bias has been known for a long time, decades really, but the realization is growing that it’s affecting a significant percentage of research in many fields, including medicine, biology and the social sciences,” Perlmutter says. “It’s not intentional, but there are so many decisions that have to be made during the research process, and [those decisions] often tend to support the hypothesis. The results are published, and then they don’t hold up when other researchers try to replicate them.”

Perlmutter, credited with breakthrough findings on dark energy and the accelerating expansion of the universe, recently co-wrote a commentary for the journal Nature on confirmation bias with Robert MacCoun, a former Cal professor of law and public policy who now teaches at Stanford. Their suggested remedy is simple: blind analysis, in which the data and results of an experiment are hidden from the analyst until the analysis is complete. An oft-cited example: A psychologist diagnosing a patient is denied access to any earlier diagnoses until after completing his or her own evaluation.

Blind analysis, Perlmutter emphasizes, is not double-blind research. The latter, common in medical studies, involves keeping both researchers and subjects ignorant of who gets what during drug trials. In blind analysis (“Sometimes you hear it called ‘triple-blind analysis,’” Perlmutter muses), a computer or fellow researcher hides the details of the data, or shifts its values by an undisclosed amount, until the final computer analyses are conducted. There is no opportunity for confirmation bias because the researchers gain access to the true values of the data only at the end of the experiment or project. Without any interference from human yearnings, errors—even minor ones—are glaring, and beg redress.
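
The hidden-shift trick is easy to picture in code. Here is a minimal sketch in Python, assuming a numerical workflow; the function names, the seed, and the size of the offset are illustrative assumptions, not details drawn from Perlmutter and MacCoun’s commentary. A colleague picks a secret seed, every measurement is shifted by the hidden amount that seed generates, and the analyst tunes the entire pipeline on the shifted numbers before the offset is ever removed.

```python
import numpy as np

# Assumption for illustration: a colleague (not the analyst)
# chooses the seed and keeps it secret until the pipeline is frozen.
SECRET_SEED = 20151015

def hidden_offset(seed):
    """Derive the undisclosed shift from the secret seed."""
    return np.random.default_rng(seed).uniform(-10.0, 10.0)

def blind(values, seed):
    """Shift every measurement by the hidden offset, so the analyst
    can't tell how close the results sit to any favored value."""
    return values + hidden_offset(seed)

def unblind(values, seed):
    """Remove the hidden offset once the analysis is locked in."""
    return values - hidden_offset(seed)

measurements = np.array([3.2, 3.5, 2.9, 3.1])
blinded = blind(measurements, SECRET_SEED)
# ... the analyst debugs cuts, fits, and error estimates on `blinded` ...
true_values = unblind(blinded, SECRET_SEED)
```

Because the analyst never knows how far the blinded numbers sit from any expected value, every decision in the pipeline has to be justified on its own merits, and confirmation bias has nothing to latch onto.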

Blind analysis is already pro forma in particle physics, says Perlmutter, and is coming into wider use in cosmology. But why isn’t it already standard practice in all the sciences, all the time? After all, its applications are often strikingly simple and low-tech; Perlmutter has observed that they can be as easy as directing a colleague to randomize the labels on test subjects.
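
That label-randomizing version is just as short. In the hypothetical Python sketch below (again an illustration, not Perlmutter’s own procedure), a colleague shuffles the group labels with a private seed, hands the scrambled list to the analyst, and reveals the true assignments only after the analysis is finalized.

```python
import random

def randomize_labels(labels, seed):
    """Return a shuffled copy of the group labels. The permutation,
    the 'key' to the blinding, stays with whoever holds the seed."""
    shuffled = list(labels)
    random.Random(seed).shuffle(shuffled)
    return shuffled

# The analyst works only with the scrambled assignments:
true_labels = ["treatment", "control", "control", "treatment"]
blinded_labels = randomize_labels(true_labels, seed=42)
```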

“I can understand some of the hesitance,” Perlmutter acknowledges. “So much personal identity can be at stake with the conclusions of any ambitious research project. There’s also the pressure for results, the pressure to publish. Say you’re a grad student, you have to present a paper at an important conference, the job market is tight. Your analysis is not blinded, but that would take more time, and you really need to make that presentation. What are you going to do?”

So ultimately, he says, it gets down to changing the culture of science. That hypothetical grad student shouldn’t feel pressured to produce data that hasn’t been debugged and confirmed. The quality of the research must matter more than the speed with which it is completed.

“Science is never a finished product,” he says. “We’re much more capable of distinguishing reality from mistaken perceptions than we were even a few years ago. But as science advances, we also find new ways of fooling ourselves, so we also have to be willing to develop techniques to counter that. And I think there’s some cause for optimism. Some of the journals are requiring more rigorous methodologies. Little by little, I think the culture is starting to shift.”
