Making K* Work for Your Research Findings

2012•05•21 • Brendan F.D. Barrett • Osaka University

UNU communications head Brendan Barrett shares insights derived from a UNU Institute for Water, Environment and Health conference that focused on K* (K-Star) — a spectrum of ideas that covers research communication, science push, knowledge translation, adaptation, transfer and exchange, knowledge brokering and mobilization, and policy pull.

• • •

To sum up the underlying need for the recent K* Conference 2012, I borrow the words of a co-participant who explained that “as we are seeing with the climate debate and other ‘wicked problems’, it is not sufficient to assume that scientific consensus about the facts will be influential in policy or the wider community”.

Andrew Campbell, writing on Charles Darwin University’s blog, prefaced that remark by highlighting that in the science–policy interface, and in understanding how knowledge is utilized, it is important to be aware of “context … and how and why it affects the choice of and likely effectiveness of knowledge strategies”.

All too often, we assume that our research findings are important (they are to us, and to whoever funded them) and that their significance should therefore be plainly obvious to our target audience. If that is the case, all we really need to do is communicate those findings to that audience, or disseminate them as widely as possible, and some sort of impact will follow.

We may struggle to measure that impact. We may find that the pathways by which our research findings influence policy are opaque. We may have to rely on anecdotal evidence rather than hard metrics when assessing impacts. And if we find limited recognizable impact, we may conclude that we scientists and academics are ineffective communicators, or we may search for some other excuse (bad timing, a lack of resources allocated to communications, and so on). What we may overlook is the wider context within which the research findings are being communicated, or perhaps we have limited capacity or time to devote to understanding that context.

It may be here that K* professionals potentially come into their own. I use the term “potentially” because, at this point, many may argue that this opinion is purely conjecture.

What is K*?

K* (K-Star) is a term coined by Alex Bielak of the UNU Institute for Water, Environment and Health (UNU-INWEH) as shorthand for a spectrum of ideas that includes research communication, science push, knowledge translation, adaptation, transfer and exchange, knowledge brokering, knowledge mobilization and policy pull.

Now, it could be that the introduction of this term takes us one step forward by putting all the knowledge-related ideas and concepts in one basket. Or it may, as independent researcher Enrique Mendizabal points out, be the cause of even more confusion — taking us one step back.

But it is here, I would suggest, that Alex Bielak and his colleagues take us a second step forward. They argue that, when thinking about K*, we need not worry about debating specific elements. Instead, we should look across the field at how knowledge is used in the research/policy/action interface, what issues are faced, how we can learn across sectors (for example, by bringing climate scientists together with health and development specialists), how to stop reinventing wheels, how to recognize the value of intermediaries, how to increase impact and how to demonstrate that impact. So by my tally, thinking about K* takes us two steps forward in understanding the dynamics of the science/policy interface, but one step back by creating more jargon.

In order to address all of the above, the K* Conference 2012 was convened by UNU-INWEH (with advice and support from an International Advisory Committee) in Hamilton, Ontario, Canada, on 24–27 April, and a green paper has been prepared and will be further elaborated in the near future.

The conference brought together nearly 60 representatives from academia, think tanks, non-governmental organizations, national agencies and international organizations to survey the K* landscape through a series of case study panels, interactive discussions, e-polling (on priorities for action) and open-space technology.

Over 100 more participated virtually via WebEx (web conferencing), and they were able to raise questions and vote on priorities. It was a very professionally organized and stimulating event (one of the best this writer has attended). There was a veritable buzz of social media activity around the conference via Twitter (#kstar2012) and blogging: more than 450 tweets were sent by 61 Twitter users in 12 countries before, during and just after the conference.

As a participant, I felt that we took a third step forward as we surveyed, in a dynamic and interactive manner, the current state of understanding around K*. We also prioritized three areas in which we would like to continue working together, as summarized by David Phipps of York University.

The overall response to the conference was very positive from the participants and beyond. Ingo Peters, IT consultant, remarked “… the conference built a lot of momentum, it put into motion a mechanism by which we can continue to share our passion, our stories, and our insights in the field of K*”.

What does this mean for a university?

Perhaps Andrew Campbell best captured the implications of this endeavour when he said: “Many science organisations still struggle to get far beyond conventional scientific publications, media releases and websites, perhaps dipping a toe into social media. To be fair, the existing performance metrics, funding and reward systems for academic research act as a disincentive to go further along the track to get involved in very meaningful, interactive co-production of knowledge, especially if that might reduce the flow of publications in ‘high impact’ refereed journals.”

It is a point that we laboured in a recent article entitled “Communicating climate science online”. Campbell reinforces our observations when he states that “new technologies offer extraordinary potential to share knowledge, to facilitate interaction and to accelerate social learning. They are far more than just cool gadgets, but enable new pedagogies and fundamentally different ways of tackling old problems”.

Yet, he also argues that “it is important to get the basics right: the underlying data systems that make information discoverable, searchable and accessible, the integration and synthesis tools that can help pull information from diverse projects together to meet a given need, and the ‘big C’ science communication tools that make it easier for scientists to promote their outputs and for media, government, industry and the community to find and access intelligible information”.

The term “big C” refers to strategic or corporate communications — the work that a typical communications department does. Alex Bielak et al. explain it as ensuring “consistent over-arching messaging internally, and to the public at large”.

This contrasts with “little c” communications, characterized as everyday, organic, meaningful interactions between K* practitioners and key stakeholders. Such practitioners (individuals or groups) need to be capable of “initiating dialogue and operating in the worlds both of the scientists and of science users, be able to fashion research outputs into language that can be understood by the users, and help develop researchable questions from articulated knowledge needs and deliver the information in timely fashion. They should be trusted, valued and respected by both communities. The information they provide must be based on robust evidence, obviating attempts to blindly navigate the science and policy swamps, and thus reducing transaction costs at the science–policy interface”.

At present, the role of such individuals and groups may not be fully recognized or appreciated by the institutions that employ them.

Again Campbell hits the nail on the head when he states: “But for many of these people, their core function is to promote the research outputs of their own institution — a conventional science communication role — with limited scope to influence research priorities or methodologies, or to promote science done elsewhere. In my view such roles should be more accurately labelled, and people should not be called brokers unless they have the capability and the mandate to negotiate in both directions.”

So this is still very much a work in progress. Many more important lessons will emerge from the K* Conference 2012 over the coming months, and there is a whole series of video interviews with participants online that I recommend you explore if you wish to gain deeper insights.

From the perspective of the United Nations University, it may be good to end by sharing the video of Vice-Rector Jakob Rhyner responding to the question: “What does K* mean for UNU?”

Making K* Work for Your Research Findings by Brendan Barrett is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Author

Brendan F.D. Barrett

Osaka University

Brendan F.D. Barrett is a specially appointed professor at Osaka University in the Center for Global Initiatives and an adjunct professor at RMIT University School of Media and Communications. His core areas of expertise include ethical cities, urban transitions, sustainability science, and science/research communication.

Brendan worked with the United Nations in Japan between 1995 and 2015, with the UN Environment Programme and the United Nations University (UNU). He is currently a Visiting Professor at the UNU Institute for the Advanced Study of Sustainability.

Previously at UNU, he was Head of Online Learning and Head of Communications, where he oversaw the development of interactive websites and video documentaries on complex social and environmental concerns. As a result, Brendan has extensive experience in science communications, and he launched the Our World web magazine in 2008.