The scientific revolution in the late sixteenth and early seventeenth centuries was based on the idea and practice of open science.
That is the view of Paul A. David, Professor Emeritus of Economics at the Stanford Institute for Economic Policy Research, who convincingly argues that open science “represented a break from the previously dominant ethos of secrecy in the pursuit of Nature’s Secrets, to a new set of norms, incentives, and organizational structures that reinforced scientific researchers’ commitments to rapid disclosure of new knowledge”.
Reflecting upon some of the criticisms of the contemporary ethos surrounding the “Climategate incident” at the end of 2009, we may ask whether climate science would benefit by being more firmly grounded in the principles of openness, perhaps along the lines of the free and open source software communities and open content movements. The concept of openness behind free and open source software describes a mode of creative knowledge production and sharing in which individuals and communities freely generate and adapt or remix resources (content or software) without licensing restrictions.
Proposing a move to a new model of climate science in no way suggests that climate change is not a real and present threat or that we cannot rely on the integrity of our contemporary climate science institutions and scientists. Rather, the argument here is that learning from the way free and open source software projects and communities work may be an important way forward in engendering more effective global, national and local responses to climate change. Nor are we suggesting that climate science is currently closed. Far from it. In a recent interview, James Hansen remarked that:
“The NASA temperature analysis agrees well with the East Anglia results. And the NASA data are all publicly available, as is the computer program that carries out the analysis. Look at it this way: If anybody could show that the global warming curve was wrong they would become famous, maybe win a Nobel Prize. All the measurement data are available.”
Our question is whether climate science could become radically open, and whether doing so would bring significant benefits. At the same time, we recognize that any new approach may be viewed by many as a political non-starter because it would require a major overhaul of the status quo of global climate science as channeled through the United Nations mandated Intergovernmental Panel on Climate Change (IPCC).
However, our proposal is for incremental change, which is itself very difficult to achieve. The goal would be to explore the potential to open climate science much more radically, step by step. For example, climate science organizations or researchers could initially adopt open licenses for their work. Following that, they could re-examine their modes of collaboration and aim towards new forms of meritocracy. Next, climate scientists could look at the way knowledge in their field is generated and shared (for instance, through open peer review processes).
Back to basics
The 2007 book by Sulayman K. Sowe and his colleagues, entitled Emerging Free and Open Source Software Practices, explains how the first codification of free and open source principles and practices occurred over 50 years ago.
Related to this, we believe that openness in science is not an alien concept for the scientific community; instead it can be understood as a basic principle for how good science should work. This is perhaps best illustrated by an organization like CERN (the European Organization for Nuclear Research), which is committed to the use of free and open source software and makes its data and experiments freely available. In July 2010, CERN announced the publication of the first results of the Large Hadron Collider experiments under a Creative Commons licence.
At that time, CERN’s Head of Open Access Salvatore Mele explained that “CERN has become a supporter of Creative Commons to acknowledge the contribution that its licenses make to accelerating scientific communication and simplifying the way researchers share their work.”
It is no accident that in 2005 Science Commons was launched to make research, data and materials easier to find and use. A quick scan of Science Commons’ partners reveals that the majority work from within the fields of biotechnology and life sciences — with climate scientists nowhere to be seen.
Many of you may know of Creative Commons and recall that it was launched back in 2001 with the goal of helping “you license your works freely for certain uses, on certain conditions; or dedicate your works to the public domain.” Creative Commons represents an attempt to move the world away from what was becoming an ever more restrictive system of copyrights. You may have also noticed that Our World 2.0 articles and videos (except those produced by certain third parties like the Guardian) are covered by Creative Commons licenses, with the goal of making them easier to share across the Web.
CERN happens to be the recognized home of the World Wide Web, through the work of its inventor, British computer scientist Tim Berners-Lee. Some commentators, like Don Tapscott and Anthony D. Williams in their book “Wikinomics”, argue that the Web supports the emergence of something called “Science 2.0” — that is, massive online scientific collaboration, for instance in relation to global efforts to sequence the human genome. They argue that “…today a new scientific paradigm … is on the verge of ignition, inspired by the same technological forces that are turning the web into a massive collaborative work space.”
They continue: “…the Web will forever change the way scientists publish, manage data, and collaborate across institutional boundaries. The walls dividing institutions will crumble, and open scientific networks will emerge to take their place. All of the world’s scientific data and research will be available to every single researcher — gratis — without prejudice or burden.”
But we are not there yet. One reason may be that, for this change to occur, more scientists need to look to other fields to understand how openness really works; the best examples may be the open source software and open content communities.
Open source software methodology
There is a really interesting presentation by Danese Cooper from 2004 that captures the essence of the open source development methodology. She argues that each open source project is governed by consensus, is based on leadership by reputation, involves massive public peer review (i.e., open to anyone who wishes to participate), is technology focused, is characterized by a transparent process and, to some extent, inevitably disrupts traditional business practices.
It is Cooper’s contention, however, that there are some fundamental challenges with respect to this methodology because it is counter-intuitive to existing management practices given that it involves some loss of control and predictability. Giving up control may be something many scientists find unattractive. Likewise, concerns have been expressed regularly about the poor communication skills of many of our climate scientists.
Cooper tries to reassure us that “anarchy” does not rule in open source projects because in reality it is usually only those with “reputation” that can commit code. At the same time extensive peer review works as a very effective form of quality assurance, and these quality procedures are further supported by published road maps that guide the expectations of those involved. The free and open source software (FOSS) methodology is best exemplified as shown in the figure below.
Free access to code and/or content and the open nature of the FOSS development process mean that almost anyone with the appropriate skills can access and work on project artifacts. They can then contribute their modified or derived work to the project’s repository for the next community member to critique and improve upon further. This kind of ‘handing-the-baton knowledge relay’ leads to incremental innovation and improvement in the quality of the artifact being worked upon.
Compared to knowledge patenting regimes, FOSS knowledge and artifacts are community-based peer products. People are free to opt out, or to fork and continue producing ‘new’ versions. Anyone with access rights can obtain or check out code from the project repository to begin the software development process. Some simply acquire the source code and take no further part in project activity (Exit 1). Others continue the development process by modifying code, fixing bugs and adding new functionality.
Participants dissatisfied with a project’s development, or with how it is managed and coordinated, may exit the cycle with the modified code to start their own “mutant” version of the project (Exit 2). Members who remain active may continue to participate by committing their work to the project’s knowledge repository, although some may terminate their involvement after a time. The development process continues in perpetuity, generating ‘new’ knowledge artifacts for the next generation of community members.
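The cycle described above — checkout, commit, Exit 1 and Exit 2 — can be sketched as a toy simulation. This is a deliberately simplified model for illustration only: the project name, contributors and actions are all invented, and no real version-control tooling is involved.

```python
import copy

def run_cycle(repository, contributors):
    """Pass a shared artifact through a relay of contributors.

    Each contributor checks out a working copy, then either improves it
    and commits it back, leaves with an unmodified copy (Exit 1), or
    forks a 'mutant' version and leaves (Exit 2).
    """
    forks = []
    for name, action in contributors:
        working_copy = copy.deepcopy(repository)  # checkout from the repository
        if action == "commit":
            working_copy["version"] += 1          # incremental improvement
            working_copy["authors"].append(name)
            repository.clear()
            repository.update(working_copy)       # commit back for the next member
        elif action == "fork":
            working_copy["name"] += "-fork"       # Exit 2: start a mutant version
            forks.append(working_copy)
        # action == "exit1": takes a copy and contributes nothing (Exit 1)
    return repository, forks

# Hypothetical project and contributors, for illustration only
repo = {"name": "climate-model", "version": 1, "authors": ["founder"]}
repo, forks = run_cycle(repo, [
    ("alice", "commit"),
    ("bob", "exit1"),
    ("carol", "fork"),
    ("dave", "commit"),
])
```

Each committed change builds on the last, so the repository’s version advances only through the relay of contributions, while forks carry a snapshot of the artifact off on their own trajectory.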
Cooper makes a very interesting point that finds parallels in the area of climate science: control is over-rated because security died with the Internet, suggesting that it is better to be proactively open rather than reactively protecting your data with ever more challenging security measures. This is something the scientists at the Climate Research Unit at the University of East Anglia understood all too well when they discovered that their emails had been either stolen or leaked.
Could the IPCC go open source?
The IPCC is a surprisingly small organization with a modest budget and only 10 employees in its Secretariat. Nevertheless, it likes to describe itself as follows:
“The IPCC is a huge and yet very tiny organization. Thousands of scientists from all over the world contribute to the work of the IPCC on a voluntary basis as authors, contributors and reviewers. None of them is paid by the IPCC.”
The voluntary nature of the IPCC mirrors, to some degree, the way programmers volunteer their time to open source software projects.
However, there is an important distinction to be made here. IPCC authors, contributors, reviewers and other experts are selected by the “Bureau of the Working Group from a list of nominations received from governments and participating organizations”. So in many respects this is a closed process in terms of who can and cannot be involved. For example, for the Fifth Assessment Report due by September 2014, a total of 831 authors were selected from over 3,000 nominations.
So could IPCC participation be opened somehow? Well, it may not make much sense to try to open up the entire IPCC process immediately, bearing in mind the recent spate of criticisms. But perhaps there could be scope for a phased opening. It is difficult for an institution to be innovative when whatever it does is subjected to intense scrutiny and attack. Hence, in the current environment there may be limited space for major reform, experimentation and creativity around how we are approaching climate science.
The recent review of IPCC processes and procedures by the InterAcademy Council, published in October 2010, illustrates this dilemma and had a few things to say about opening up the IPCC. Referring to the peer review process, it made the following statement:
“Although an open review potentially improves the [IPCC] report by increasing the level of scrutiny and widening the range of viewpoints offered, it also substantially increases the number of review comments. Drafts of the Fourth Assessment Report drew 90,000 review comments (an average of a few thousand comments per chapter), stretching the ability of Lead Authors to respond thoughtfully and fully. A more targeted process for responding to reviewer comments could both ensure that the most significant review issues are addressed and reduce the burden on authors, who currently must document responses to all reviewer comments.”
The question here is how the open source software community would respond in a similar situation, with 90,000 errors or issues in code to be addressed. Many challenges would have to be resolved, such as how final decisions would be made, how disagreements would be handled, and who would ultimately be accountable for the data and analysis. In some respects, Wikipedia, while far from perfect, is one model that can guide our thinking on how some of these challenges could be handled.
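As a thought experiment, one way an open source style response might tame such a flood of comments is to treat them like bug reports in an issue tracker: deduplicate them, then escalate the most frequently reported issues for a full author response while batching the rest. The sketch below is purely illustrative; the comment data, chapter names and threshold are invented.

```python
from collections import Counter

def triage(comments, threshold=2):
    """Group duplicate review comments and surface the most-reported issues.

    `comments` is a list of (chapter, issue) pairs. Issues reported at
    least `threshold` times are escalated for a documented author
    response; the rest are batched for a single editorial pass.
    """
    tally = Counter(comments)
    escalated = [(chapter, issue, count)
                 for (chapter, issue), count in tally.items()
                 if count >= threshold]
    batched = [(chapter, issue)
               for (chapter, issue), count in tally.items()
               if count < threshold]
    # Most frequently reported issues first, mirroring bug-severity ordering
    escalated.sort(key=lambda item: item[2], reverse=True)
    return escalated, batched

# Invented example comments, standing in for the real review stream
comments = [
    ("ch2", "outdated glacier data"),
    ("ch2", "outdated glacier data"),
    ("ch2", "outdated glacier data"),
    ("ch5", "typo in table 5.3"),
    ("ch7", "missing uncertainty range"),
    ("ch7", "missing uncertainty range"),
]
escalated, batched = triage(comments)
```

The point of the sketch is not the code itself but the workflow it encodes: open source projects routinely absorb large volumes of public feedback precisely because they triage rather than respond to every report individually.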
One way forward may be for the IPCC to establish a new, experimental Working Group on a specific topic — with the mandate to adopt a truly open science approach that draws from open source methodologies. This could run in parallel to the existing Working Groups involved with the Fifth Assessment Report. Perhaps we could learn a great deal from this kind of experiment, and the lessons from it could be implemented for the Sixth Assessment Report.
What do you think? What would an open source methodology look like for the IPCC? Could climate science position itself at the forefront of a new open source-based scientific revolution?