Barriers to impact? On research in the ‘real world’

by Ewen Speed, Nov 21, 2018

These days academic research is expected to have some degree of impact ‘out there’, in the ‘real world’, away from the academy. This is not a bad thing, but the ways in which impact has played out in practice assume a number of things about the academy and the real world that need to be thought through. The view that the ‘real world’ is somehow separate from the academy creates a situation where the world of ideas is cut off from the world of action. The question is, who does this view suit? What is gained (and lost) by regarding the academic world as something separate, apart from the world of being and doing?
This is not to say that the insistence on relevance and impact places unfair demands on academic research. My argument is that the policies and processes of assessing impact have intended and unintended consequences (with positive and negative implications). Rather than accepting this situation, it might be more productive to have a conversation about how impact should be measured and what counts as a ‘good impact’.
For example, an alternative approach might be to find ways in which the world outside the university could be made more amenable to the principles and processes of academic research, i.e. to ask what organisations outside the university might learn from the way academics conduct research. More often than not, the emphasis is placed the other way around, with academics required to demonstrate a value and a role for their work in the so-called ‘real world’. Somewhat perversely, this insistence on demonstrating the relevance of work conducted in the university to practice and processes in the ‘real world’ actually creates barricades between these two social worlds. Rather than breaking down barriers and integrating these social worlds, the constant insistence on demonstrating value creates, constitutes and maintains the barriers between them.
Decisions are then made about different pieces of research based on whether that research has had ‘enough’ impact. This in turn performs a gatekeeping role, where some research (good research) is regarded as ‘relevant’ outside the academy, and other research (bad research) is not. This creates a new hierarchy, where value is based on criteria developed and deployed by external organisations, separate from the professionals carrying out the work. This then brings us to questions of power and control, as it seems that part of the impact agenda might be about who gets to decide what counts as useful (or impactful) research. We now have gatekeepers and policies to externally assess and determine which research has high value, low value and no value.
One possible response to this, for the academic researcher, is to find ways in which the value of the research can be assured both internally (in terms of the academic value of the research) and externally (in terms of the ‘real-world’ value of the research). One such attempt is the evidence-based movement. There is a very well-developed ‘evidence-based medicine’ movement, predicated upon ‘trusted evidence, informed decisions leading to better health’. Similarly, there is the ‘evidence-based policy’ movement, but it does not tend to have such snappy straplines and is blighted by more mundane concerns about the slippage (in policy research and practice) between ‘evidence-based policy’ and ‘policy-based evidence’. In characterising this policy context, Paul Cairney, a policy scholar, states:
Some express the naïve view that policymakers should think like scientists and/or that evidence-based policymaking should be more like the ideal of evidence-based medicine in which everyone supports a hierarchy of evidence. Others try to work out how they can improve the supply of evidence or set up new institutions to get policymakers to pay more attention to it.
This is an apt description, and it speaks to the artificial boundaries I identified above. On one side, advocates say those in the ‘real world’ need to pay more heed to our scientific way of doing things. On the other, there is a need to find ways that similarly bring the policy makers around to the researchers’ way of thinking; in other words, to make academic knowledge more accessible. In both instances the prevailing emphasis is on getting ‘them’, out there in the ‘real world’, to come into the academic field. So, in one sense, this could be regarded as an attempt (a counter-move) to maintain a degree of power and control over those who might impose external research value metrics.
At the end of the day, this approach does little to move us on. And here I use ‘us’ in the collective sense, to refer to us all, rather than to identify specific communities of people, such as academics or policy wonks. Instead, there is a need to develop understanding, in both fields, of process and practice, and to develop ways in which the two might be dovetailed in much more mutual, less hierarchical, processes.
For example, despite the impact agenda, there is clearly still a lack of understanding of academic evidence and the policy process, and this speaks to misunderstandings and mischaracterisations on both sides. From the academic side, this involves a naïve reading of policy contexts. Policy is painted as something that can be relatively easily influenced, with no need to address questions of complexity, or other disciplinary and professional boundaries. The assumption demonstrates, as Cairney has argued, that researchers are engaging with the policy process they wish existed rather than the processes which actually exist. Policy in any social world does not exist in a vacuum but is predicated upon internal and external hierarchies of evidence, and on processes and practices of influence, patronage and lobbying. These, and this should not need spelling out to social scientists, are far removed from two throwaway lines in the conclusion of an academic paper stating that the preceding research has policy relevance and impelling policy makers to sit up and take notice (whoever those policy makers might be).
In terms of policy, even if the academic evidence is of the highest possible standard, this does not guarantee that the evidence will be picked up by policy makers or politicians. If there is no political case for the policy, there will be little chance of the evidence being implemented. The point is that demonstrating value is not enough; there is also a clear need to demonstrate the political expediency of any proposed policy, regardless of the evidence. It is in this context that we see policy-based evidence rather than evidence-based policy, where policy makers may make very selective use of the evidence in a way that supports their view and denigrates another. For example, consider the reports that emerged on Monday last week about evidence used to inform government policy around fixed odds betting terminals (FOBTs). Following the November 2018 budget, which delayed changes to the maximum stake, the minister Tracey Crouch resigned her post in government. The delay in reducing the stake from current levels was justified by concerns about 15,000–21,000 job losses. However, there were also allegations that a prominent pro-gambling MP had influenced the process. The story that initially broke last week was that the purported job losses were based on a report written for the gambling industry by the KPMG accountancy firm. KPMG said the report was “performed to meet specific terms of reference” agreed with the bookies’ trade body, adding that there were “particular features determined for the purposes of the engagement”. They continued that “the report should not therefore be regarded as suitable to be used or relied on by any other person or for any other purpose”. By Wednesday, the u-turn had itself been reversed (I am not sure what the phrase is for doing a u-turn on a u-turn) and the original position restored, apparently on foot of Crouch’s resignation and a threatened rebellion by MPs. Whither questions of impact in this chaos?
It is in this context that research evidence, policy and process all play out. One context is not better or worse than the other. A failure to engage with the dynamic complexity of evidence practices, policy practices and processes runs the risk of reifying the mischaracterisations of the fields of academic research and of the practical ‘real world’. In turn, this reifies and maintains barriers between universities and a purportedly separate ‘real world’. We should be trying to remove as many barriers as possible, so that universities are not seen as separate from the communities that support them and the wider social world.
Author’s note: some of the material contained in this blog previously appeared in a co-authored piece, ‘Turning psychology into policy: a case of square pegs and round holes?’, with Carl Walker and Danny Taggart, which was published under a Creative Commons licence in Palgrave Communications.