Scientific Misconduct: The Manipulation of Evidence for Political Advocacy in Health Care and Climate Policy
IN HIS BOOK The Problem of Knowledge, A. J. Ayer argues that the scientific approach to knowledge is valuable to the extent that “one does not merely insist that factual inferences from one level to another are legitimate but seriously tries to meet arguments that show that they are not.” In the scientific approach, the development of knowledge involves repeated tests of principles we believe might be true, careful review of the results and the data used to generate them, and evaluation of the weight of evidence supporting and opposing these ideas. The knowledge used for rational economic and scientific decision-making, as Friedrich Hayek noted, “never exists in concentrated or integrated form but solely as dispersed bits of incomplete and frequently contradictory knowledge.” Aggregation and evaluation of this knowledge is the critical function of science.
Scientific knowledge is a valuable but incomplete tool for the development of public policy. Its strengths and weaknesses both lie in the fact that the scientific process is based on only one value, which is the integrity of the process of generating knowledge. Science can answer whether a relationship exists between two factors that can be manipulated to create a different result, but it cannot answer the question of whether we ought to manipulate the relationship. The question of “ought” is not a matter of epistemology, but rather a decision of ethics and moral values. These types of questions are better settled by political debate that elucidates and develops a consensus of society on whether a course of action is in society’s best interest. Science can inform that process by providing an objective evaluation of the underlying reality, but it cannot replace consideration of moral values. Likewise, moral and ethical decisions made without reference to the underlying objective reality are severely handicapped.
Increasingly, however, science is being manipulated by those who try to use it to justify political choices based on their ethical preferences, and who are willing to act to suppress evidence of conflict between those preferences and the underlying reality. By undermining the processes of science through the selective inclusion or exclusion of data and arguments, advocates seek to frame a reality consistent with their personal ethical and moral framework that serves to persuade the polity that their preferred course of action is correct. This problem is not unique to government. Law professor Holly Doremus, in a 2007 article for the Texas Law Review, notes that scientists can move from “skeptical evaluation” into advocacy due to “employment by an entity with a financial stake” in a particular outcome and “other sorts of strong policy preferences.” This blurs the differences between scientific and policy debates, limits the role of science in policymaking, and, in the words of climate scientist Roger Pielke, turns scientists into “leading members of advocacy campaigns.”
This problem is clearly seen in two policy domains: health care and climate policy. In the area of health care, critics have long worried about the inordinate influence of pharmaceutical and medical device manufacturers on research to show the safety and viability of new products. Recent information, however, shows that government agencies may cause more problems in this area—a worrisome development considering legislation recently passed by the United States Senate would allow federal agencies to punish organizations whose researchers publish results that conflict with what the agency feels is appropriate. In climate policy, recently released e-mails from the government-sponsored Climate Research Unit at the University of East Anglia reveal a pattern of data suppression, manipulation of results, and efforts to intimidate journal editors into rejecting contradictory studies. This misconduct appears designed to manufacture a social consensus in support of the researchers’ advocacy of addressing a problem that may or may not exist.
The influence of interested parties on health care research has been an issue of concern for at least two decades, focusing largely on whether sponsorship of pharmaceutical research and drug trials by manufacturers has led to manipulation of data and suppression of adverse results in order to support approval of new products. Manufacturer sponsorship has been found to be associated with a reduced likelihood of the reporting of adverse results, and a significant link has been found between industry funding and the likelihood that the results of a randomized trial will support a new therapy. These biases may taint the results of meta-analyses used to guide clinical practice. The American Cancer Society, for its part, has been accused of organizing a campaign of misleading attacks on the integrity of researchers who published a study, originally supported by the society, which presented results contradicting the society’s stance on the health impact of environmental tobacco smoke.
One proposed solution to this problem is to increase public funding for research on therapeutic effectiveness. Ironically, that may well aggravate the problem. In July 2007, AcademyHealth, a professional association of health services and health policy researchers, published results of a study of sponsor restrictions on the publication of research results. Surprisingly, the results reveal that more than three times as many researchers had experienced problems with government funders related to prior review, editing, approval, and dissemination of research results as had experienced such problems with private funders. In addition, a higher percentage of respondents had turned down government sponsorship opportunities due to restrictions than had done the same with industrial funding. Much of the problem was linked to an “increasing government custom and culture of controlling the flow of even non-classified information.”
Of particular concern is a provision of the Senate-passed Patient Protection and Affordable Care Act, which needs only to be approved by the House of Representatives before it receives President Obama’s signature. In a section creating a new Patient-Centered Outcomes Research Institute to conduct comparative-effectiveness research, the bill allows the withholding of funding from any institution where a researcher publishes findings not “within the bounds of and entirely consistent with the evidence,” a vague authorization that creates a tremendous tool for ensuring self-censorship and conformity with bureaucratic preferences. This appears to be, in part, an effort to bypass the court order in Stanford v. Sullivan, a case involving federal contractual requirements that would have banned researchers from any discussion of their work without pre-approval by the Department of Health and Human Services. The order held that such blanket bans are “overly broad” and constitute “illegal prior restraint” on speech. The language in the Senate bill attempts to overcome this hurdle by eliminating prior restraint and instead relying on the threat of post hoc punishment to induce self-censorship. As AcademyHealth notes, “Such language to restrict scientific freedom is unprecedented and likely unconstitutional.” Given the higher propensity of government agencies to try to control the dissemination of scientific information, this is an alarming threat to the scientific process and to the utility of scientific research to inform good policymaking.
Another example of manipulation of scientific processes to support preordained policy preferences is in the area of climate policy. A number of scientists who support radical political interventions to prevent climate change have engaged in a systematic attempt to alter contradictory data and intimidate journal editors into rejecting papers presenting contradictory evidence.
On November 20, 2009, the climate science Web site RealClimate.org received a large file of e-mails downloaded by hackers from the Climate Research Unit at the University of East Anglia, which has been at the center of research on climate change and has been quite influential with the United Nations Intergovernmental Panel on Climate Change. Those communications involved prominent climate scientists such as Michael Mann of Penn State University and Phil Jones of the University of East Anglia.
The public release of these documents created an immediate controversy, revealing data manipulation to cover inconvenient findings; efforts to intimidate editors into not publishing results that refuted their arguments; and a general contempt for opponents, including efforts to discredit them as cranks rather than address their arguments and evidence. Efforts to remove the editors of the journals Climate Research and Geophysical Research Letters for accepting research papers that raised questions about the magnitude of human-induced global warming are documented, as well as efforts to boycott Weather and other Royal Meteorological Society journals for requiring appropriate data and calculations to be submitted with all articles. The communications document resistance to the release and sharing of research data, including violations of British and American “freedom of information” requirements and the deletion of research data rather than making it available for other scientists to analyze.
The reasons behind this misconduct are clear. The tactics of those who wish to impose political change to address environmental issues consist largely of convincing the population and political leaders that the existence and threat of man-made global warming is a settled issue, based on a consensus of scientific information. The messiness of contradictory information belies that consensus. Hence the leaders of the movement have attempted to suppress such information.
The problem is not limited to the authors of the Climate Research Unit e-mails. The phenomenon has also been observed in how public agencies censor evidence to support rulemaking activities and shape public support for specific climate policies. For example, in a March 12, 2009, memorandum regarding the U.S. Environmental Protection Agency’s plans to regulate greenhouse gases under the Clean Air Act, the director of the EPA’s National Center for Environmental Economics forbade the author of an internal critique of the science behind the proposed findings to release his report outside the center. On March 17, the director wrote to the author, informing him that his critique would not be included or discussed, because
the time for such discussion of fundamental issues has passed for this round. The administrator and the administration has decided to move forward on endangerment, and your comments do not help the legal or policy case for this decision. … I can only see one impact of your comments given where we are in the process, and that would be a very negative impact on our office.
These cases highlight the temptations toward manipulation of scientific data to build support for favored political and economic outcomes. The purpose of systematic testing and evaluation of ideas, which we describe as “science,” is to allow us to differentiate between what Friedrich Hayek refers to as “facts” and “appearances.” Properly used, it gives us an objective means to identify the causes of problems and the potential impact of proposed policy interventions, which must also be evaluated in the context of moral and ethical values. When we abandon the values and practices of science, or pervert them to support a predetermined agenda, we elevate “appearances” and subordinate “facts.” Abandoning the objectivity of science to suppress evidence that does not favor the preferences of the censor undercuts the ability of the polity to make rational decisions. Such censorship is inconsistent with democratic ideals in that it denies venues for legitimate exchange of ideas through open debate.
While private misconduct is threatening enough, the growing practice of governmental censorship of scientific data may be even more frightening. Private censorship can be limited if a diversity of outlets exists for communication. Private organizations lack the coercive power of government, and no private organization—even large and wealthy corporations in the energy or pharmaceutical industries—possesses the power and resources of government.
Furthermore, a fundamental duty of a democratic regime is to ensure the conditions for open exchange of information and informed participation of citizens in governance. Violation of the letter and spirit of that duty undercuts the social contract that is the foundation for the legitimacy of the democratic state. Democracy depends not on the preferences of elites, but rather on a functional marketplace for ideas and vigorous debate between contending viewpoints.
Dr. Avery is an assistant professor of public health in the Department of Health and Kinesiology and the Regenstrief Center for Health Care Engineering at Purdue University. This article is © 2010 by The Cato Institute, and is reprinted by permission from Cato’s Briefing Papers Series.