
Is research ever value-free?

Considering the ethical implications of the scientific pursuit

Though many students at last fall’s General Assembly voted in favour of condemning military research on campus, others spoke out to oppose the motion. In an article covering the event, The Daily quoted U3 Engineering student Adam Cytrynbaum, who argued that military-funded research was not necessarily objectionable.

“Military research is done to better the people of Canada and the United States,” Cytrynbaum said. “Research is independent of what it is used for.”

The notion articulated by Cytrynbaum – that technology is independent of its end use – is one that is often repeated, albeit in varied forms. Joe Schwarcz, McGill chemistry professor and director of the Office for Science and Society, has a similar opinion with regard to chemicals: “They are inanimate; they don’t make decisions. People make decisions,” he said in an interview. “The same chemical that can be used for mankind’s benefit can be used for mankind’s detriment.”

Robert Proctor, a history of science professor at Stanford University and author of Value-free Science?, finds, however, that there is not always a clear-cut distinction between the design of a given technology and its eventual application.

“Part of the truth is that anything may be used or abused, but in very complex systems, oftentimes products are very end-specific…. How do you abuse a cruise missile? How do you abuse an atom bomb? Well, an atom bomb can only be used one way,” Proctor said.

But what about breaking it down to a chemical level and the substances used to make that bomb? According to Schwarcz, “You can use the ingredients in a nuclear reaction to produce an atom bomb or to produce a power station and generate electricity. [The effect] all depends on how you use it.”

In his book, Proctor disagrees. “This supposed neutrality describes only the simplest technologies, the most abstract principles. The seven simple machines, perhaps, or the rules of arithmetic, may be neutral in this sense. But an abstract truth often conceals a concrete lie,” he writes. “‘Guns don’t kill people, people kill people.’ Yet is it surprising that a society that surrounds itself with guns will use them?”

Use-and-abuse debates may sound inconsequential at first, but the ethical implications are great. For one thing, if it is true that research itself is neither good nor bad and that only the application of technology matters, scientists can be absolved of responsibility for how their research is used. On the other hand, if intent is built into technologies, then researchers themselves are directly responsible for the consequences.

Science and/or activism
The concept of neutral technology stems from the fact that science is widely represented as a purely objective discipline, where students are taught that the truth can only be discovered through impartial hypotheses and an unbiased attitude. For this reason, many scientists fear the intrusion of partisanship in any scientific pursuit.

McGill professor David Green has extensive experience in conservation biology, a field populated by both activists and scientists. He is the director of the Redpath Museum and former Chair of the Committee on the Status of Endangered Wildlife in Canada (COSEWIC). According to Green, COSEWIC – an organization that has a mandate to give an impartial assessment of how species are doing – owes its success to its impartiality.

“I’ve come to realize that although the scientists on COSEWIC all think in terms of probability values and values of p and assessment lines and graphs and equations… when we send that off to government ministers and managers, the thing that carries the weight is that everyone on COSEWIC agrees with it. It has got that weight of consensual opinion behind the assessment, and that’s what makes sense to politicians,” Green said.

Although he agreed that science is imbued with social and cultural values, Green argued that there is something to be said for preserving objective science.

“There’s the question of integrity and believability that must be maintained,” Green said. “Reputation takes a long time to accumulate; it doesn’t take very long to destroy. We want science to be trusted in society; if science isn’t trustworthy in society, then there are lots of other people who would love to stand up and say, ‘Well trust us instead.’”

Proctor, however, maintains that it is possible for scientists to wear their politics on their sleeve and still be taken seriously. “Advocates are often the most ‘objective,’ meaning often the most probing, most impassioned, most able to ferret out the truth,” he said. “That’s part of the myth of value-free science – this idea that if you just sit back and observe, you will land upon great truths. Generally speaking, you will not. The great search requires engagement, commitment, passion, including passion for the truth.”

Research as a social process
Researchers make choices about what research takes place, but they in turn are influenced by the social and cultural priorities that shape the funding of their work.

Green points to the idea of natural selection, arrived at independently by both Alfred Russel Wallace and Charles Darwin, as evidence of social influence. “It is not just a product of their work out in the field; I mean, why were they out in the field anyway except that it was a socially and culturally acceptable thing to do at the time?” Green said.

Proctor agreed, adding that society also has the power to determine what research should and should not take place. “I think we all have a stake in what kinds of research get done. Research priorities are expressions of social priorities,” he said. “And what a society thinks is important will shape what kind of science gets done.”