This was a great surprise to me: there is little or no scientific evidence that policies influenced by research are better policies. That is why every think tank staffer and agency officer in the US should start by reading Using Science as Evidence in Public Policy, a recent (2012) report by the National Research Council's Division of Behavioral and Social Sciences and Education.
Using Science as Evidence in Public Policy encourages scientists to think differently about the use of scientific evidence in policy making. This report investigates why scientific evidence is important to policy making and argues that an extensive body of research on knowledge utilization has not led to any widely accepted explanation of what it means to use science in public policy. Using Science as Evidence in Public Policy identifies the gaps in our understanding and develops a framework for a new field of research to fill those gaps.
The text paints a brilliant picture of the relation between research, the policy enterprise, and political activity. Given the complex interaction of scientific data with values and the political interplay of interests, evidence-influenced politics is suggested as a more informative metaphor, descriptively and prescriptively, than evidence-based policy.
Even when values are at stake, scientists can legitimately advocate for attending to knowledge that accurately describes the problem being addressed or that predicts probable consequences of proposed actions. It is our normative position that if policy makers take note of relevant science, they increase the chances of realizing the intended consequences of the policies they advance. This is evidence-influenced politics at work.
This clear position notwithstanding, there has been very little advance in our understanding of how scientific results are used as evidence in policy making since the 1970s, when the issue first emerged: the NRC report Knowledge and Policy: The Uncertain Connection already signaled the problematic relation between the two spheres.
The committee behind the report has put forward a research agenda to determine “whether, why, and how science is, or is not, used as evidence in public policy,” marshaling an interdisciplinary effort with contributions from social psychology, behavioral economics, decision theory, and organizational sociology. If we build a science of how policy makers work, we may find more relevant ways to make usable social science.
1. The serendipitous nugget
Carol H. Weiss’s work was foundational for a new field she called the sociology of knowledge application, with lasting effects on policy evaluation standards. In her seminal Social Science Research and Decision Making, an important reference in the NRC report, Weiss criticized a simplistic conception of how social science influences policy makers. In her view, a deterministic notion in which research outputs translate immediately into new policy initiatives excludes the most common ways in which science influences political action and raises the stakes enormously.
Policy nuggets, in fact, can only happen when the following eight conditions are met:
- “Research [is] directly relevant to an issue up for decision”
- “[Research is] available before the time of decision”
- “[Research] addresses the issue within the parameters of feasible action”
- “[Research] comes out with clear and unambiguous results”
- “[Research] is known to decision makers”
- “[Decision makers] understand its concepts and findings and are willing to listen”
- “[Research] does not run athwart of entrenched interests or powerful blocs”
- “[Research] is implementable within the limits of existing resources”
Does this ever happen? Rarely. It is worth recalling the famous anecdote about how Paul Weyrich decided to found the Heritage Foundation after the American Enterprise Institute released a great, useful report on supersonic transport two days after the vote that killed the initiative. Had the report appeared on time, it would probably have been a nugget. Since that day, timeliness (condition 2) has been one of the Heritage Foundation's core communication values.
It could be interesting to measure the output of a think tank's communications against this set of requirements for relevance; call it the nugget test. Taking the test literally would be a complex and costly process, since it requires identifying many contextual factors, including stakeholders, relevant officers, and decision points, and extracting qualitative information from decision makers. I wonder if it's worth exploring. While I don't think every think tank would agree that forcing every project toward nugget potential is desirable, the list is a good measure of the distance between strictly academic research and policy research designed for impact. In other words, in going from data to recommendations there is always a leap, and the expertise of think tanks lies in making it.
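As a thought experiment, the nugget test could be sketched as a simple checklist scorer. The criterion names and the boolean scoring below are my own illustration of Weiss's eight conditions, not an established instrument:

```python
# Hypothetical sketch of a "nugget test": score a piece of policy research
# against Weiss's eight conditions. The short criterion names and the
# simple yes/no scoring are illustrative assumptions only.

WEISS_CONDITIONS = [
    "directly_relevant",        # relevant to an issue up for decision
    "timely",                   # available before the time of decision
    "feasible",                 # within the parameters of feasible action
    "unambiguous",              # clear and unambiguous results
    "known_to_decision_makers", # known to decision makers
    "understood_and_heard",     # concepts understood, makers willing to listen
    "no_entrenched_opposition", # doesn't run athwart powerful blocs
    "implementable",            # within the limits of existing resources
]

def nugget_score(assessment: dict) -> tuple:
    """Return (number of conditions met, list of conditions failed)."""
    met = [c for c in WEISS_CONDITIONS if assessment.get(c, False)]
    failed = [c for c in WEISS_CONDITIONS if c not in met]
    return len(met), failed

# Example: the AEI supersonic-transport report was relevant and clear,
# but arrived two days after the decisive vote.
sst_report = {c: True for c in WEISS_CONDITIONS}
sst_report["timely"] = False
score, gaps = nugget_score(sst_report)
print(score, gaps)  # 7 of 8 conditions met; fails only on timeliness
```

Even this toy version makes the real difficulty visible: every boolean hides a qualitative judgment that would have to come from interviews with decision makers, which is exactly why the literal test would be costly.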
2. Networks for knowledge validation
The basic positioning of the social sciences in the nonprofit sector has implications for their use in policy making, notably in the number and workings of intermediary organizations—think tanks and advocacy organizations—and in the heavy presence of interested private funding.
The committee's diagnosis is a less radical, more optimistic formulation of Thomas Medvetz's main argument in Think Tanks in America:
The growth of think tanks over the last forty years has ultimately undermined the value of independently produced knowledge in the United States by institutionalizing a mode of intellectual practice that relegates its producers to the margins of public and political life.
Together with the complicated balance between autonomy and dependence on funders, the validity of their research is perhaps the most vulnerable aspect of think tanks. Fellows don't publish in peer-reviewed journals because they want to be clear and concise and to reach a wider public, and that's a good thing. But peer review should not disappear completely as an alternative or a complement to the quality-assurance processes in place inside many of the most reputable institutions. When the Center for Global Development adopted its data- and code-sharing policy in 2011, it was saying that this matters. Colleagues at other institutions should have the right incentives to find time to replicate methods and confirm results. In an article for Science that same year, Gary King argued that “we need to nurture the growing replication movement”. Networking is not only a DC thing: “Social scientists need to continue to build a common, open-source, collaborative infrastructure that makes data analysis and sharing easy”.
Things are changing. The McCourt School of Public Policy at Georgetown University is opening a Massive Data Institute that “will use Big Data sets to increase understanding of society and human behavior and thus improve public policy decision-making”. There are many risks in embracing the big data paradigm when we don't even understand the influence of traditional social science, but there are also many opportunities. Hopefully replicability and collaborative networks will be on the agenda.