THE MIND OF SCIENCE
The sciences can be classified by the scale and type of their subjects. First
comes physics, which studies the most fundamental levels of matter such as
atoms and subatomic particles. Then comes chemistry, which looks at how those
particles assemble themselves into molecules and interact with each other. These
sciences are called the physical sciences—or “pure” sciences or “hard” sciences
—because they measure the cold, hard, objective facts of physical matter. They’re
based on mathematics rather than on the squishy unpredictability of living things.
[Figure: The scale of the universe mapped to the branches of science, with the physical sciences as the foundation.]
Biology and the other life sciences build on physics and chemistry to study
living cells, tissues, and organisms. These interact in complex systems that are
often unstable and evolving in unpredictable directions. Geology and astronomy
also study solid physical matter. Geology examines the composition of the planet; astronomy scales up by many orders of magnitude to look at the material structure and motion of stars, galaxies, and the universe.
Then come the “soft” sciences of the mind. Psychology examines individual
behavior, while sociology studies the interactions of groups. Those in the hard
sciences often feel superior to those in the soft sciences because they deal with
the level of matter rather than that of mind. Physicist Ernest Rutherford, whose scattering experiments revealed in 1911 that the atom is mostly empty space, with nearly all of its mass concentrated in a tiny nucleus, had a low opinion of the other sciences, sniffing contemptuously, “In science there is only physics. All the rest is stamp collecting.”
THE REPLICATION CRISIS
When they publish their papers, scientists are required to provide a “methods”
section. This describes how the experiment was set up, clearly enough that other scientists can run the same experiment in an attempt to replicate the original study’s findings.
A discovery published in a single paper may or may not represent an actual effect. But when an independent research team comes up with the same result, it becomes far more likely that the effect found in the first study is real. For this reason, replication studies are important in science.
So much so that before it approves a new drug, the U.S. Food and Drug
Administration (FDA) requires two studies demonstrating the drug’s efficacy.
When formulating standards for “empirically validated therapies,” the American Psychological Association adopted the same bar, requiring that a study be replicated before a therapy is declared evidence-based (Chambless & Hollon, 1998).
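Why does one replication carry so much weight? A rough way to see it is Bayes’ rule. The sketch below is illustrative, not from the text: the prior, the statistical power, and the false-positive rate are all assumed numbers, chosen only to show the shape of the reasoning.

```python
# A hypothetical, back-of-the-envelope model of why replication matters.
# All three numbers below are assumptions for illustration only.
prior = 0.10   # assumed prior probability that a tested effect is real
power = 0.80   # assumed chance a real effect yields a positive result
alpha = 0.05   # chance of a false positive when there is no effect

def posterior(p_real: float) -> float:
    """P(effect is real | positive result), by Bayes' rule."""
    return (power * p_real) / (power * p_real + alpha * (1 - p_real))

after_one = posterior(prior)       # a single positive study
after_two = posterior(after_one)   # an independent replication
print(f"after one positive study: {after_one:.2f}")   # ~0.64
print(f"after a replication:      {after_two:.2f}")   # ~0.97
```

Under these assumed numbers, a single positive study leaves real doubt, while an independent replication pushes the odds that the effect is real above 95 percent.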
In the early 2000s, the biotech giant Amgen set out to replicate some important studies. The company was pouring millions of dollars into cancer drug development that rested on earlier findings in cancer biology. If the effects found in the original studies were robust, the next stage of drug development would be built on solid ground. Amgen asked its scientists which previous studies were most important to their work and came up with 53 “landmark” studies.
In 10 years of work, Amgen was able to replicate only 6 of the 53 studies. The
researchers called this “a shocking result” (Begley & Ellis, 2012).
A few months earlier, another giant pharma company, Bayer, had published a
similar analysis. This led to a sustained effort to determine how many key
studies were replicable. An attempt to replicate five cancer biology trials was
successful for only two (eLife, 2017). Epidemiologist John Ioannidis of Stanford
University summarized the findings by saying, “The composite picture is, there
is a reproducibility problem” (Kaiser, 2017).
What about the soft sciences? An international group of 270 researchers set
out to replicate 100 studies published in 2008 in three top psychology journals.
They were able to replicate fewer than half of them (Open Science Collaboration, 2015).
The journal Nature conducted a survey of 1,576 researchers about their experiences with replication. It found that over 70 percent of them had failed when attempting to reproduce another scientist’s research findings, and over half could not even replicate their own research (Baker, 2016).
The “reproducibility crisis” in science has many roots. Among the factors that stand in the way of successful replication are haphazard laboratory management, sample sizes too small to provide adequate statistical power, and specialized techniques that are uniquely hard to repeat. The sketch below shows how quickly power collapses as samples shrink.
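As a concrete illustration, here is a minimal sketch assuming a two-sample t-test and a medium effect size; the numbers are assumptions for illustration, not drawn from any of the studies above.

```python
# A hypothetical power calculation: how likely is a two-sample t-test
# to detect a medium effect (Cohen's d = 0.5) at various sample sizes?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (10, 30, 100):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group -> power = {power:.2f}")
# Approximate output: 0.18, 0.47, 0.94
```

With only ten subjects per group, such a study would detect a real, medium-sized effect less than one time in five, so a failed replication by itself tells us very little.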
Selective reporting plays a big role, too: positive results are usually reported, while negative ones are swept under the rug. These are called file drawer studies because, metaphorically, they are tossed into the bottom drawer of a filing cabinet, never to see the light of day. One analysis estimates that 50 percent of psychology studies are never published (Cooper, DeNeve, & Charlton, 1997).
Another factor making studies hard to replicate is that beliefs can influence
the results. Scientists have beliefs. They’re human. They are not godlike
intellects immune from glory seeking, egotism, jealousy, and territorialism. They
have whims, preferences, and needs. They need successful research to obtain
grants, jobs, and tenure. They fall in love with their work, the “Pygmalion effect” immortalized in the musical My Fair Lady. Scientists approach their work with as many presuppositions as any other demographic group has.
Scientists believe in what they’re doing and look for effects they expect to
find. The strength of their beliefs may skew their results, a phenomenon called
the expectancy effect. To control for this, most medical research is carried out
blind: the statisticians analyzing the two groups of data don’t know which sample comes from the experimental group and which from the control group.
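In code, blinding can be as simple as replacing group names with neutral codes before the data reach the analyst. The sketch below is a hypothetical illustration, not a description of any particular study’s procedure; the function and variable names are invented.

```python
# A hypothetical sketch of blinded analysis: real group labels are
# swapped for neutral codes, and the key is kept sealed until the
# statistical analysis is finished.
import random

def blind(samples: dict) -> tuple[dict, dict]:
    """Return (blinded data, sealed key mapping real names to codes)."""
    groups = list(samples)
    codes = [f"group_{i}" for i in range(len(groups))]
    random.shuffle(codes)
    key = dict(zip(groups, codes))
    blinded = {key[g]: data for g, data in samples.items()}
    return blinded, key

data = {"treatment": [5.1, 4.8, 5.6], "control": [4.2, 4.5, 4.1]}
blinded, key = blind(data)
print(blinded)  # the analyst sees only group_0 / group_1
```

Only after the analysis is locked in is the key opened to reveal which code was the experimental group.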