Bad Science

September 16, 2009
by Erik Vance

Spotting the Darwins in a field of Lamarcks

Every day, scientists announce frontier discoveries or breakthroughs. Some promise new technologies, others life-improving or life-saving medicines. Most never pan out.

In March of 1989, a pair of chemists announced a discovery that essentially took a sandblaster to modern physics. But rather than publish in the journal Nature, Stanley Pons and Martin Fleischmann called a press conference and excitedly announced that, using a jar of lukewarm water and a simple electrode, they could create the same nuclear fusion that fuels the core of the sun.

About all that little jar of water generated was a string of bad Hollywood movies. Instead of the energy of the future, cold fusion became the biggest scientific bust since 1835, when the New York Sun reported bat people living on the moon.

Still, the parade of bad science and science reporting—ranging from small studies with flawed logic to outright fraud—has continued.

Some science mistakes are malicious, but most are just the result of risk-taking. Much of what Freud said was eventually disproved. Ptolemy was overturned by Copernicus and Newton, who were overturned by Einstein (who was wrong in criticizing quantum physics). Today they are remembered as much for their groundbreaking bad science as they are for their good stuff. But how is even the best-informed reader supposed to tell the good science from the fluff?

Like buying a dog, pedigree matters
A common romantic figure in many science stories is the social outcast, the lone researcher, spurned by colleagues, driven by the pursuit of a paradigm-shifting discovery always just out of reach.

“Lone scientists are lone scientists for a reason,” says Gary Taubes, a journalist and scientific skeptic with Science magazine who wrote a 1993 book about cold fusion. “And that reason is that they are wrong.” With some irony, Taubes, a leading challenger of the fat-is-bad-for-you argument, points out that his newest book on the subject is a perfect example. “The history of science tells us that I’m a quack,” he admits. “It’s an interesting position to be in.”

The pedigree rule means careful science readers also need to look at where the announcement appears. Recently two Berkeley economics graduate students, Saurabh Bhargava and Vikram Pathania, declared that talking on the phone while driving does not statistically cause crashes (contrary to over 125 previous studies). Even ignoring their limited pedigree as students and the weight of science against them, a quick read reveals that the study was “published” on the libertarian American Enterprise Institute website.

Today, many questionable discoveries come out at conferences, in press junkets, or in online stories instead of in peer-reviewed journals. Peer review means that a paper goes through months of grueling edits by anonymous experts. They rip apart the experiment’s design, results, and analysis. They may even send the researcher back to the lab for more experiments. The better-known the journal, the tougher the gauntlet.

That’s not to say that the big journals always get it right. Even they are vulnerable to outright fraud. Seoul National University’s Woo-Suk Hwang duped one of the best journals in the world with fabricated stem-cell research. The tobacco industry got dozens of articles questioning the effects of second-hand smoke into excellent journals.

Losing the numbers game
Whether studying atomic energy or whale mating, scientists need a large enough number of examples to be sure the results aren’t just coincidence. How many varies depending on the author’s goals and how the numbers will be analyzed.
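
For readers who want to see the numbers game in action, here is a minimal simulation sketch. The 60 percent cutoff and the sample sizes are invented for illustration and come from none of the studies mentioned here; the point is only that a purely random yes/no outcome, with no real effect behind it, will still look like a headline-friendly majority surprisingly often in a small sample:

```python
import random

def spurious_rate(sample_size, trials=10_000):
    """Estimate how often a sample of this size shows a 60-percent-or-better
    majority when the underlying 'effect' is nothing but a fair coin flip."""
    lopsided = 0
    for _ in range(trials):
        # Each subject answers yes/no completely at random (no real effect).
        yes = sum(random.random() < 0.5 for _ in range(sample_size))
        if yes / sample_size >= 0.6:  # looks like a notable finding
            lopsided += 1
    return lopsided / trials

if __name__ == "__main__":
    for n in (22, 220, 2200):
        print(f"sample of {n}: chance 'finding' in {spurious_rate(n):.1%} of runs")
```

Run it and the pattern is clear: with a couple dozen subjects, chance alone produces an apparent “60 percent effect” in roughly one sample in seven, while with thousands of subjects it almost never does.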

Examples of skimpy data show up in the newspaper every day. A recent study in the journal BMC Nursing said that young women are less susceptible to abuse if they bring a friend on a date or date in their circle of friends. Not exactly earth-shattering, but the author laid out a few parameters for safe dating based on the interviews. Read the story closely and you will notice it was based on just 22 young women.

It’s not really fair to call this bad science. It was a qualitative paper that aspired to describe something, rather than the quantitative work that most laypeople associate with science, which aspires to determine an answer. Also, scientists are prone to overstating the impact of their results, while reporters are prone to not checking the sample size and to misunderstanding the role of a small study. According to Taubes, this is especially true in some social sciences. He says, half-kidding, “You can’t trust anything that comes out of sociology, psychology, chronic disease, epidemiology,” or “anything based on a few interviews.”

With bigger announcements, the key is “repeatability.” Thirteen years after the cold fusion commotion, a scientist at Oak Ridge National Laboratory published a paper in the journal Science saying he had created fusion in tiny acetone bubbles. The prestigious researcher and revered journal seemed to validate the claim. Yet years later, after no one else had repeated the experiment, so-called bubble fusion was widely discredited, with some claiming deliberate fraud.

No such thing as science in a vacuum
Many writers say that the less important a discovery is, the more likely it’s true. That’s a nice way of saying, “Watch out for the effects of money and politics.”

Charles Petit, a veteran science writer and blogger, reads thousands of science stories a year and says that while manipulation of science makes headlines (think big tobacco or human cloning), it’s not common. He says a good writer always checks the money behind a study; findings from a national panel like the EPA’s Science Advisory Board or a publicly funded study tend to be more trustworthy.

But even a strong statistical relationship does not show that one factor causes another, or the mechanism by which it would happen. For example, a recent flood of stories claimed scientists had proven political orientation is hard-wired in the brain. College students were asked to play a simple computer game, and the conservatives were consistently less flexible than the liberals. Most stories failed to mention that the scientists didn’t account for gender differences or question how a couple hundred students could represent America.

Relax, that’s the way it’s supposed to go
In the end, science is a process of trial and error. When generous, Petit calls science “a democracy” (when frustrated, he calls scientists a bunch of “dissembling, excuse-making flip-floppers”). Taubes says 90 percent of textbook content is true, but 90 percent of science news probably isn’t—because it hasn’t yet been vetted by the science community. Berkeley professor emeritus of journalism and bestselling science writer Tim Ferris says that at a certain point, those who want to stay on the cutting edge of science news have to get used to disappointment. Daily journalists are under a great deal of time pressure and don’t always understand what they are writing about well enough to be skeptical. For more accuracy, Ferris says, read books or weekly magazines. He also recommends following experienced reporters like Petit or 47-year veteran David Perlman of the San Francisco Chronicle.

“A trustworthy journalist is like a trustworthy clothing salesman,” says Ferris. “A bad salesman will sell you anything that you are willing to buy. A good salesman knows you are coming back and will say, ‘You know, that doesn’t fit very well.’”
