Risk and danger – time to change systems, not humans?
by Judy Green | Nov 22, 2017

In June this year, the Mayor of London committed to delivering ‘Vision Zero’ for road injuries. The initiative aims to eliminate deaths and serious injuries on London’s roads by 2041. That’s a bold aim: despite falling road collision rates, 116 people were fatally injured and 2,385 seriously injured on London’s roads in 2016.
It’s also, for some, a controversial aim. Vision Zero started in Sweden in 1994, as an approach to mobility which assumes that people are fallible, but that systems should not be. In an interview, Swedish safety scientist Matts-Åke Belin explained that initial opposition to Vision Zero came from two sides. First, transport economists saw safety as primarily a ‘cost-benefit’ calculation, and argued that the costs of reducing fatalities to zero would outweigh the benefits. This is not simply a cold-hearted question of profits: there are potentially counter-productive social costs of investing in safety. If, for instance, the cost of making rail travel completely safe were that railways became too expensive to use, we would all switch to cars – making travel even less safe overall. The second source of opposition was the dominance of the ‘human factors’ approach, which starts from the assumption that the main cause of accidents is human error, and that the solution therefore lies in changing behaviour. People have to be altered to enable them to travel safely around busy, dangerous streets; they have to become model ‘mobile citizens’ – well informed, well trained, independent, and always alert. The responsibility for safety, in this view, lies predominantly with the most vulnerable road users.
But human ‘error’ is inevitable. We are all fallible, which is why accidents will happen. The observation that operator error is ‘identified’ as the cause of 80% of all accidents was the point of departure for Charles Perrow’s classic book Normal Accidents. His counter-argument was that systems, not their operators, lead to accidents. Published in 1984 in the aftermath of the Three Mile Island accident, Perrow’s theory was that ‘accidents’ are inevitable in all systems that are ‘tightly coupled’ and ‘interactively complex’: that is, in organisations or systems in which small errors can propagate quickly because the system does not tolerate delay, and the links between elements are not linear. Accidents emerge out of small, inconsequential difficulties that set up hidden interactions which rapidly spread. Because people (and things) will go wrong, and unexpected interactions between component failures are inevitable, better safety engineering (or more training) is unlikely ever to design out all possible interactions: indeed, such measures may make the system even more complex, and thus riskier. ‘Normal accidents’ also have a significant degree of incomprehensibility: operators fail to comprehend what is going on as disaster unfolds, not because of poor training or cognitive failures, but because these ‘normal accidents’ were unanticipated even by the systems’ designers. Perrow’s conclusion – that technologies such as nuclear power, where accidents can have catastrophic consequences, should be abandoned – has been debated ever since, with some considering it an overly pessimistic view.
The optimists instead looked at success. Their starting point was what became known as High-Reliability Organisations, such as air traffic control, which are tightly coupled and interactively complex, with the potential for disasters, but where catastrophic accidents are (it is argued) rare. When researchers looked at how and why some systems appear to be largely reliable, a different picture emerged. They suggested that organisational cultures with the right mix of adherence to protocols, learning from mistakes, adaptability and deference to expertise can protect organisations from catastrophic failures. Rather than focus on success and efficiency, this theory goes, such organisations focus on reliability and develop ‘collective mindfulness’. In 1991, Todd LaPorte and Paula Consolini, part of the University of California, Berkeley team that undertook detailed fieldwork on High-Reliability Organisations, noted that many were in the public sector, and:
until recently, had relatively abundant resources, allowing them to invest heavily in reliability enhancing activities. This has nurtured an organizational perspective in which short-term efficiency has taken second seat to very high-reliability operations.
Whilst the specific findings relating to air traffic control or nuclear power are unlikely to be transferable in any simple way to less high-stakes organisations, what both the pessimists and the optimists agree on is the vital importance of understanding both organisational cultures and techno-social systems. To explain how risks become dangerous, and how best to manage them, requires first of all a societal conversation about what matters, and then an approach that focuses not on individuals or operators, but on systems. We cannot, perhaps, abandon motorised road transport – or at least the social costs of doing so might be too high. But, as in Sweden, if we shift the way we look at it from an economic or behavioural framing to a systems framing, then the problem changes. As Belin puts it, the challenge then becomes: “let’s create a system for the humans instead of trying to adjust the humans to the system”. Vision Zero in Sweden was an outcome of this new systems way of thinking about transport: a civil justice frame, in which the system is adjusted to be more forgiving of human error.
Once the civil right to mobility is accepted, the aim becomes investing in ways of making sure that opportunities for collisions are reduced and that, when inevitable accidents do happen, they are not catastrophic: they don’t result in serious or fatal injuries. So we can do things like reduce the speed of traffic in built-up areas, or redesign junctions to protect pedestrians rather than prioritise drivers.
The Mayor of London should be congratulated for this step towards a civil justice approach to London’s transport system. We will never eliminate accidents, but it should be possible to treat everyday mobility in the city as a right to be enjoyed by all, not only by the more agile, assertive and alert among us.
Adopting Vision Zero is also a reminder to revisit some of the classic organisational work on risk and safety. In a context of austerity, it is easy to make political capital from reductions in health and safety monitoring, and from forcing yet more ‘improvements’ in efficiency in the public sector. Lines of accountability and responsibility for safety then become fragmented, with unclear ownership, and technical expertise becomes devalued and deprofessionalised as organisations are forced to over-focus on efficiency rather than reliability. Both the ‘normal accidents’ theory and the work on High-Reliability Organisations suggest that we do this at our peril.