When faced with a human error problem, you may be tempted to ask 'Why didn't they watch out better? How could they not have noticed?'. You may think you can solve your human error problem by telling people to be more careful, by reprimanding the miscreants, by issuing a new rule or procedure. These are all expressions of 'The Bad Apple Theory', which holds that your system is basically safe if it were not for those few unreliable people in it. This old view of human error is increasingly outdated and will lead you nowhere. The new view, in contrast, understands that a human error problem is actually an organizational problem. Finding a 'human error' by any other name, or by any other human, is only the beginning of your journey, not a convenient conclusion. The new view recognizes that systems involve inherent trade-offs between safety and other pressures (for example, production). People need to create safety through practice, at all levels of an organization. Breaking new ground beyond its successful predecessor, The Field Guide to Understanding Human Error guides you through the traps and misconceptions of the old view. It explains how to avoid the hindsight bias, how to zoom out from the people closest in time and place to the mishap, and how to resist the temptation of counterfactual reasoning and judgmental language. But it also helps you look forward. It suggests how to apply the new view in building your safety department, handling questions about accountability, and constructing meaningful countermeasures. It even helps you get your organization to adopt the new view and improve its learning from failure. So if you are faced with a human error problem, abandon the fallacy of a quick fix. Read this book.
Author: David D. Woods
Publisher: CRC Press
Release Date: 2017-09-18
Genre: Technology & Engineering
Human error is cited over and over as a cause of incidents and accidents. The result is a widespread perception of a 'human error problem', and solutions are thought to lie in changing the people or their role in the system. For example, we should reduce the human role with more automation, or regiment human behaviour by stricter monitoring, rules or procedures. But in practice, things have proved not to be this simple. The label 'human error' is prejudicial and hides much more than it reveals about how a system functions or malfunctions. This book takes you behind the human error label. Divided into five parts, it begins by summarising the most significant research results. Part 2 explores how systems thinking has radically changed our understanding of how accidents occur. Part 3 explains the role of cognitive system factors - bringing knowledge to bear, changing mindset as situations and priorities change, and managing goal conflicts - in operating safely at the sharp end of systems. Part 4 studies how the clumsy use of computer technology can increase the potential for erroneous actions and assessments in many different fields of practice. And Part 5 shows how the hindsight bias always enters into attributions of error: what we label human error is actually the result of a social and psychological judgment process in which stakeholders in the system in question focus on only one facet of a set of interacting contributors. If you think you have a human error problem, recognize that the label itself is no explanation and no guide to countermeasures. The potential for constructive change, for progress on safety, lies behind the human error label.
While many organizations see the value of creating a just culture, they struggle when it comes to developing it. In this Second Edition, Dekker expands his views, additionally tackling the key issue of how justice is created inside organizations. Dekker also introduces new material on ethics and on caring for the 'second victim' (the professional at the centre of the incident). The result is a natural evolution of the author's ideas.
Major accidents are rare events due to the many barriers, safeguards and defences developed by modern technologies. But they continue to happen with saddening regularity, and their human and financial consequences are all too often unacceptably catastrophic. One of the greatest challenges we face is to develop more effective ways of both understanding and limiting their occurrence. This lucid book presents a set of common principles to further our knowledge of the causes of major accidents in a wide variety of high-technology systems. It also describes tools and techniques for managing the risks of such organizational accidents that go beyond those currently available to system managers and safety professionals. James Reason deals comprehensively with the prevention of major accidents arising from human and organizational causes. He argues that the same general principles and management techniques are appropriate for many different domains. These include banks and insurance companies just as much as nuclear power plants, oil exploration and production companies, chemical process installations and air, sea and rail transport. Its unique combination of principles and practicalities makes this seminal book essential reading for all whose daily business is to manage, audit and regulate hazardous technologies of all kinds. It is relevant to those concerned with understanding and controlling human and organizational factors and will also interest academic readers and those working in industrial and government agencies.
Ten Questions About Human Error asks the type of questions frequently posed in incident and accident investigations, people's own practice, managerial and organizational settings, policymaking, classrooms, Crew Resource Management training, and error research. It is one installment in a larger transformation that has begun to identify both the deep-rooted constraints and the new leverage points of current views of human factors and system safety. The ten questions about human error are not just questions about human error as a phenomenon, but also about human factors and system safety as disciplines, and where they stand today. In asking these questions and sketching the answers to them, this book attempts to show where current thinking is limited: where vocabulary, models, ideas, and notions are constraining progress. This volume looks critically at the answers human factors would typically provide and compares/contrasts them with current research insights. Each chapter provides directions for new ideas and models that could perhaps better cope with the complexity of the problems facing human error today. As such, this book can be used as a supplement for a variety of human factors courses.
The second edition of a bestseller, Safety Differently: Human Factors for a New Era is a complete update of Ten Questions About Human Error: A New View of Human Factors and System Safety. Today, the unrelenting pace of technology change and growth of complexity calls for a different kind of safety thinking. Automation and new technologies have resulted in new roles, decisions, and vulnerabilities, whilst practitioners are also faced with new levels of complexity, adaptation, and constraints. It is becoming increasingly apparent that conventional approaches to safety and human factors are not equipped to cope with these challenges and that a new era in safety is necessary. In addition to new material covering changes in the field during the past decade, the book takes a new approach to discussing safety. The previous edition looked critically at the answers human factors would typically provide and compared/contrasted them with current research and insights at that time. This edition explains how to turn safety from a bureaucratic accountability back into an ethical responsibility for those who do our dangerous work, and how to embrace the human factor not as a problem to control, but as a solution to harness. New in this edition: a new approach reflecting changes in the field; updated coverage of system safety and technology changes; and the latest human factors/ergonomics research applicable to safety. Organizations, companies, and industries are faced with new demands and pressures resulting from the dynamics and nature of the modern marketplace and from the development and introduction of new technologies. This new era calls for a different kind of safety thinking, a thinking that sees people as the source of diversity, insight, creativity, and wisdom about safety, not as the source of risk that undermines an otherwise safe system.
It calls for a kind of thinking that is quicker to trust people and mistrust bureaucracy, and that is more committed to actually preventing harm than to looking good. This book takes a forward-looking and assertively progressive view that prepares you to resolve current safety issues in any field.
Situations and systems are easier to change than the human condition - particularly when people are well-trained and well-motivated, as they usually are in maintenance organisations. This is a down-to-earth practitioner's guide to managing maintenance error, written in Dr. Reason's highly readable style. It deals with human risks generally and with the special human performance problems arising in maintenance, providing an engineer's guide to understanding and solving them. After reviewing the types of error and violation and the conditions that provoke them, the author sets out the broader picture, illustrated by examples of three system failures. Central to the book is a comprehensive review of error management, followed by chapters on managing the person, the task and the team; the workplace and the organization; and creating a safe culture. It is then rounded off and brought together in such a way as to be readily applicable for those who can make it work, to achieve a greater and more consistent level of safety in maintenance activities. The readership will include maintenance engineering staff, safety officers and all those in responsible roles in critical and systems-reliant environments, including transportation, nuclear and conventional power, extractive and other chemical processing and manufacturing industries, and medicine.
How do people cope with having "caused" a terrible accident? How do they cope when they survive and have to live with the consequences ever after? We tend to blame and forget professionals who cause incidents and accidents, but they are victims too. They are second victims, whose experience of an incident or adverse event can be as traumatic as that of the first victims. Yet information on second victimhood and its relationship to safety, about what is known and what organizations might need to do, is difficult to find. Thoroughly exploring an emerging topic with great relevance to safety culture, Second Victim: Error, Guilt, Trauma, and Resilience examines the lived experience of second victims. It goes through what we know about trauma, guilt, forgiveness, and injustice and how these might be felt by the second victim. The author discusses how to conduct investigations of incidents that do not alienate second victims or make them feel even worse. It explores the importance of support and resilience and where the responsibilities for creating them may lie. Drawing on his unique background as psychologist, airline pilot, and safety specialist, and his own experiences with helping second victims from a variety of backgrounds, Sidney Dekker has written a powerful, moving account of the experience of the second victim. It forms compelling reading for practitioners, risk managers, human resources managers, safety experts, mental health workers, regulators, the judiciary, and many other professionals. Dekker provides a strong theoretical background to promote understanding of the situation of the second victim and solid practical advice about how to deal with trauma that continues after an event leading to preventable harm or even avoidable death of a patient, consumer, or colleague.
Author: Professor James Reason
Publisher: Ashgate Publishing, Ltd.
Release Date: 2013-11-01
Genre: Political Science
This succinct but absorbing book covers the main way stations on James Reason’s 40-year journey in pursuit of the nature and varieties of human error. He presents an engrossing and very personal perspective, offering the reader exceptional insights, wisdom and wit as only James Reason can. A Life in Error charts the development of his seminal and hugely influential work from its original focus on individual cognitive psychology through the broadening of scope to embrace social, organizational and systemic issues.
What does the collapse of sub-prime lending have in common with a broken jackscrew in an airliner’s tailplane? Or the oil spill disaster in the Gulf of Mexico with the burn-up of Space Shuttle Columbia? These were systems that drifted into failure. While pursuing success in a dynamic, complex environment with limited resources and multiple goal conflicts, a succession of small, everyday decisions eventually produced breakdowns on a massive scale. We have trouble grasping the complexity and normality that give rise to such large events. We hunt for broken parts, fixable properties, people we can hold accountable. Our analyses of complex system breakdowns remain depressingly linear, depressingly componential - imprisoned in the space of ideas once defined by Newton and Descartes. The growth of complexity in society has outpaced our understanding of how complex systems work and fail. Our technologies have gotten ahead of our theories. We are able to build things - deep-sea oil rigs, jackscrews, collateralized debt obligations - whose properties we understand in isolation. But in competitive, regulated societies, their connections proliferate, their interactions and interdependencies multiply, their complexities mushroom. This book explores complexity theory and systems thinking to understand better how complex systems drift into failure. It studies sensitive dependence on initial conditions, unruly technology, tipping points, diversity - and finds that failure emerges opportunistically, non-randomly, from the very webs of relationships that breed success and that are supposed to protect organizations from disaster. It develops a vocabulary that allows us to harness complexity and find new ways of managing drift.
The consideration of human factors issues is vital to the mining industry. As in other safety-critical domains, human performance problems constitute a significant threat to system safety, making the study of human factors an important field for improving safety in mining operations. The primary purpose of this book is to provide the reader with a much-needed overview of human factors within the mining industry, in particular to understand the role of human error in mine safety, explaining contemporary risk management and safety systems approaches. The approach taken is multidisciplinary and holistic, based on a model of the systems of work in the mining industry domain. The ingredients in this model include individual operators, groups/teams, technology/equipment, work organisation and the physical environment. Throughout the book, topics such as human error and safety management are covered through the use of real examples and case studies, allowing the reader to see the practical significance of the material presented while making the text rigorous, useful and enjoyable. Understanding Human Error in Mine Safety is written for professionals in the field, researchers and students of mining engineering, safety or human factors.
This book presents a set of new skills for the managers who drive safety in their workplace. It is Human Performance theory made simple. If you are starting a new program, revamping an old program, or simply interested in understanding more about safety performance, this guide will be extremely helpful.
Society at large tends to misunderstand what safety is all about. It is not just the absence of harm. When nothing bad happens over a period of time, how do you know you are safe? In reality, safety is what you and your people do moment by moment, day by day to protect assets from harm and to control the hazards inherent in your operations. This is the purpose of risk-based thinking, the key element of the six building blocks of Human and Organizational Performance (H&OP). Generally, H&OP provides a risk-based approach to managing human performance in operations. But, specifically, risk-based thinking enables foresight and flexibility—even when surprised—to do what is necessary to protect assets from harm but also achieve mission success despite ongoing stresses or shocks to the operation. Although you cannot prepare for every adverse scenario, you can be ready for almost anything. When risk-based thinking is integrated into the DNA of an organization’s way of doing business, people will be ready for most unexpected situations. Eventually, safety becomes a core value, not a priority to be negotiated with others depending on circumstances. This book provides a coherent perspective on what executives and line managers within operational environments need to focus on to efficiently and effectively control, learn, and adapt.
This book explores the human contribution to the reliability and resilience of complex, well-defended systems. Usually the human is considered a hazard - a system component whose unsafe acts are implicated in the majority of catastrophic breakdowns. However there is another perspective that has been relatively little studied in its own right - the human as hero, whose adaptations and compensations bring troubled systems back from the brink of disaster time and again. What, if anything, did these situations have in common? Can these human abilities be ’bottled’ and passed on to others? The Human Contribution is vital reading for all professionals in high-consequence environments and for managers of any complex system. The book draws its illustrative material from a wide variety of hazardous domains, with the emphasis on healthcare reflecting the author’s focus on patient safety over the last decade. All students of human factors - however seasoned - will also find it an invaluable and thought-provoking read.