
Environmental Values

Editorial, Vol. 19 No. 2

Censoring Science in Research Officially

Environmental Values 19 (2010): 141-146.
doi: 10.3197/096327110X12699420220473

Hume's fact-value dichotomy is something which appeared useful during the Enlightenment but which today has become part of a dangerous rhetoric used by some for the control and manipulation of information. Others seem unaware of the implications when they draw on this divide to describe the natural sciences as separated from political process, and subject to different standards of conduct from the social and policy sciences. This line of reasoning can quickly slip into classifying the social sciences as mere means of communicating what the 'real scientists' have discovered. Science-policy failures are then interpreted as matters of poor communication of 'the truth' to an ignorant public in need of education. Simultaneously, the natural sciences are promoted over other forms of knowledge and left susceptible to manipulation through the lack of an explicit account of the political processes within which they are embedded.

Natural scientists cling to the idea that they provide 'the truth' and that empiricism leads to objective and universal knowledge. The common political term now in vogue is 'evidence-based science' for (in)forming policy. In reality the validity and meaning of knowledge for public policy is contextual, complex, and subject to change and unknowns. However, being aware of the vagaries surrounding knowledge and its creation can be compatible with accepting a level of objectivity and the contribution made by scientific method. What a broader understanding of knowledge creation does is to place the natural and social sciences within the context of limits upon human ability to comprehend. This means being aware of how we simplify and abstract to gain understanding, how we view and interpret the same phenomena from different perspectives, and how we shape knowledge in light of who we are and how we operate. The way in which humans understand the world is, as a result, highly value laden.

Yet, the values entailed in research are often hidden, even though the way in which projects are funded, the type of work favoured and the framing of that work all make evident a particular set of values. For example, in 2006 a major new climate research programme at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia was framed around adaptation because of the political unacceptability of researching mitigation. Australian public research funding has also discriminated against renewable energy, favouring, for example, clean coal technology. Similarly, in 2005 I attended a presentation at a sustainable development network conference in London by the UK's chief scientist, David King, where he used nuclear power as his sole example of 'alternative energy' futures.

Scientific and industrial research organisations propound technologies as if they were merely engaged in value free activities which could never be questioned. Their own research priorities belie the proposition. There is nothing value free about funding technologies based on coal and nuclear as opposed to wind and wave. There is nothing value free about promoting genetically modified organisms over organic agriculture. There is nothing value free about establishing research on second generation bio-fuels as opposed to alternatives to automotive transport, demand control and behavioural change. There is nothing value free about supporting technological or market approaches to environmental problems over taxes or direct regulation. Yet all these things have been and are being undertaken in the name of objective, value free scientific research.

That the science-policy interface is a contested space in which the future of humanity is being determined too rarely gets conscious societal recognition. From microwaves to nanotechnology, the scientific community engages in changing the future and imposing its own visions and values. This supports very specific industrial sectors and power groups in society. Hence the contrast between levels of research funding (e.g., nuclear vs. solar). What gets funded, by whom and for what purposes means research is integrally entwined with public policy and political process.

If we accept that the funding of knowledge creation is inherently political, the scope for objective empirical truth seeking seems severely reduced. Openly recognising funding sources and their influence is then highly important. Manipulation of information is evident when climate sceptics are funded by oil corporations or public research agencies are barred by ministers from criticising government policy. However, deliberate attempts to skew information may employ a range of strategies, from the crude closure of research programmes or firing of staff to the more subtle use of publication procedures, via management and administration protocols, to censor and rework critical findings. Identifying abuse in highly contested areas of environmental, health or other policy may then prove far from easy.

Yet, some claim that the public must trust science because of all those beneficial devices (e.g. wifi, microwave ovens) which surround their daily lives. Trust in science is about as evident from the ownership of a microwave oven as trust in tobacco companies is evident from the purchase of cigarettes. The proliferation of technological artefacts in modern (post-) industrial society has rather more to do with economics and marketing than public trust. Indeed, science can apparently operate in society without trust.

The inconvenient truth is that humanity has repeatedly been confronted by the fact that supposedly beneficial technologies, materials, chemicals, production processes and consumer products have proven harmful both socially and environmentally. Downplaying the negative side of knowledge creation and implementation has been a political imperative, often driven by a supposed military necessity. Indeed, the political economy of the industrial-military complex has played a key role in determining the type of knowledge pursued, produced, concealed and revealed. This has also had major consequences for how humans interact with the environment; hence the ever-present rhetoric of war, with its recommendation of control and conquer strategies using the latest technological hardware to defeat 'the enemy'. Scientists are then co-opted as the technological soldiers in the war on the environment.

As such there is a presumption that repairing ecological damage using technological correctives is as good as never having created a problem in the first place. In countering this idea, Hale and Grundy (2009) note that technology is far from neutral, and they raise the important issue of respect for others. Technology as an environmental damage corrective changes how humanity perceives its responsibility towards others and Nature. What would be deemed wrongful actions can now be justified as legitimate because technology is available to repair or prevent the damage. For example, we can enhance the Greenhouse Effect as long as, say, seeded particles in space reflect away enough incoming solar radiation. Technological optimism enters into an ethical and environmental values debate which goes well beyond traditional scientific comfort zones. Technology itself alters how humans interact with and value the world around them.

The problem confronting research organisations today is whether to promote knowledge creation in the mode of 1950s science fiction or twenty-first-century science-policy reality. In the 1950s scientists were seen as elitist founts of knowledge, characterised as truth-seeking middle-aged men in white coats, working on controlled laboratory experiments to produce technology which always proved beneficial to society. Their mission was to boldly reduce all unknowns to nothing. The reality is that men and women from diverse backgrounds perform a variety of formal and informal roles in the production of new information in often highly charged political contexts, involving complex interactions with society and having unknown outcomes. Research creates new uncertainties as it attempts to reduce old ones. Understanding the world around us, let alone the science-policy interface, then requires skills from both the social and natural sciences on an equal footing. Yet the natural sciences are still held to be where an undeniable and singular truth must lie.

In the last issue of Environmental Values the challenge to science raised by uncertainty was discussed in the context of genetically modified crops (Myhr, 2010). There the point was made that scientific disunity reinforces public awareness that expert opinions are susceptible to influence by institutional, economic and political factors. Suspicion arises as to the motives of scientists, companies and public agencies. The same problem arises in other environmental areas, e.g. human induced climate change. The power to frame a research discourse allows pre-determination of the range of alternatives and answers. Pretending there is one singular and certain truth leads to closure and self-righteous assertion, which then inevitably runs the risk of exposing perfectly valid information to ridicule as lies. In contrast, admitting equally valid multiple perspectives, and strong uncertainty, leads to the need for debate and informed judgement.

The question is, on what basis can good judgment be made about scientific findings, the adoption of technology, regulatory approaches and the design of institutions? Public policy is being informed by a range of organisations, amongst which are universities, government funded agencies, non-governmental organisations and corporate-funded vested interest groups. The move towards making research 'self-funding' has meant pushing formerly independent researchers into the hands of clients who pay for a service. Those clients may be corporations or political parties. Either way, they do not pay for results which criticise their values and beliefs. Public agencies find themselves unable to criticise the incumbent government, and universities find themselves indebted to benefactors, e.g. a supermarket chain, car or computer manufacturer, oil company or chemical producer. Successful managers of research raise funds and in doing so develop relationships with their funders. Problems most clearly arise when the supposedly neutral are actually manipulated by powerful political groups, or results from vested interest groups are dressed up as impartial.

Hence appointing senior managers in public research organisations on the basis of their strong ties with specific industrial sectors inevitably risks biasing research outcomes. Power is handed over, enabling the suppression and censorship of results, reports and publications regarded as unpalatable to the managers' corporate benefactors and social networks. Those at the top of the publicly funded research food chain, and in key managerial positions within it, need to be subject to public scrutiny. In any case, research organisations and their managers cannot be allowed to hide behind an asserted need for independence of internal procedure and confidentiality. Free speech and contestation without penalisation are basic research requirements.

The hope of many researchers holding to truth-seeking science is that by focusing on a specific research agenda and avoiding overt public policy statements their work can be conducted in a way which is separated from the messy world of politics and value judgments. Yet as humans these same people hold values and make judgments on a daily basis, including their judgments over what is 'good science', 'quality research' and 'valid argument'. The dangers here are not that judgments are required but that values are concealed behind a veneer of scientific respectability rather than openly debated.

The hope seems to be that within a framed and funded set of research some form of 'objectivity' and balance can prevail. Good scientists do good science as judged by their fellow good scientists who appeal to 'facts', follow the correct procedures and avoid statements of value judgements. Reliance then falls upon methodology, process and procedure to salvage the promised impartial outcomes. The dominant form of creating trust in research findings appeals to empiricism and peer review within an epistemology seeking consensus and certainty. In contrast, strong uncertainty (social indeterminacy, ignorance) and conflicting values mean understanding the world as a plurality of different legitimate descriptions, reflecting different perspectives and commitments. As Myhr (2010) concludes, quality is not related to certainty and consensus but is a characteristic of a process involving mutual learning and the identification and negotiation of relevant normative standards.

The pretence is that mathematics, models and computer simulations can somehow convince the public that information produced using a 'scientific method' is just as objective as the old-fashioned laboratory experiment was thought to be. The social sciences are meant to follow suit. The economics paper without the model or statistics is clearly regarded as an opinion piece, not objective or evidence-based science. More generally, the social sciences are classified as mere means of communicating what the 'real scientists' have discovered. Research findings, reports and papers produced without the trimmings of supposedly objective method then run the risk of dismissal. Those claiming to hold the correct approach to knowledge creation and communication regard such work as an easy target for derision. Yet, in dismissing findings from such disciplines as the social sciences or applied philosophy, these deniers of knowledge readily expose their own political and social values. Thus, when they try to apply their tests of objectivity, what arises is political censorship.

Not that such censorship is restricted to the social sciences. The wrong type of knowledge can just as easily arise from the natural sciences and require classification as unscientific opinion, value judgement or some similar designation marking it as unworthy. Critiques of 'quality' and 'method' are often employed with the aim of avoiding engagement with the core arguments or evidence.

The point is not that all information is equally relevant or valid, nor that judgments of quality can or should be avoided, but rather that determining relevance, validity and quality is a value laden process open to dispute. Naively assuming the public should trust in 'science', whether natural or social, is to ignore what our experience of environmental problems and understanding of environmental values have taught. Science is a contested contributor of societal information which may help or harm in unforeseen ways. Open debate and discussion are essential for understanding new knowledge. Unfortunately, there are those, even in supposed democracies, who still believe they have the right to control that debate, stop research, ban publications, censor writing and determine what the public gets to hear.

CLIVE L. SPASH

References

Hale, B. and W.P. Grundy. 2009. Remediation and Respect: Do Remediation Technologies Alter Our Responsibility? Environmental Values 18(4): 397-415.
Myhr, A.I. 2010. The Challenge of Scientific Uncertainty and Disunity in Risk Assessment and Management of GM Crops. Environmental Values 19(1): 7-31.
