Prevention of Accidents in Transportation


Abstract

Transportation accidents carry great personal, organizational, societal, and economic costs. Applied psychology can derive research inspiration from, and can contribute much to the understanding and eventual prevention of, accidents in all kinds of transportation modes. The applied psychological questions surrounding accidents can vary widely, from issues of perception in an individual driver, to trade-offs made in complex organizations under pressures of scarcity and competition, to biases and heuristics in our understanding of culpability, cause, and control. Further progress on safety in transportation systems hinges in part on contributions from applied psychology.

Outline

  1. Why Is It Important to Understand Accidents?
  2. Applied Psychology and Accidents
  3. Why Do Accidents Happen?
  4. Regulation of Transportation Systems and Incident Reporting
  5. Interventions, New Technology, and Future Developments

1. Why Is It Important to Understand Accidents?

Transportation accidents carry tremendous social, environmental, and economic costs. Traffic accidents are among the leading causes of death for people under 34 years of age. In 2002 alone, 42,815 people died in highway accidents in the United States; that equates to more than 117 fatalities per day, or 1.51 fatalities per 100 million vehicle miles traveled. The number of injuries hovers around 3 million per year. The economic costs of these accidents are enormous: more than $230 billion per year for the United States alone. Survivors of transportation accidents often require treatment and rehabilitation and also suffer psychological consequences that may carry over into work and social settings.
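
As a quick arithmetic check on the figures above, the short sketch below reproduces the per-day and per-mile rates from the annual fatality count. The annual vehicle-miles-traveled total is not given in the source; it is back-calculated here from the quoted rate and should be read as an assumption.

```python
# Arithmetic behind the 2002 U.S. highway fatality figures quoted above.
# The annual vehicle-miles-traveled (VMT) total is an assumption,
# back-calculated from the quoted rate of 1.51 fatalities per 100 million VMT.

fatalities_2002 = 42_815

fatalities_per_day = fatalities_2002 / 365
print(f"Fatalities per day: {fatalities_per_day:.1f}")  # ~117.3, i.e., "more than 117"

rate_per_100m_vmt = 1.51
assumed_annual_vmt = fatalities_2002 / rate_per_100m_vmt * 100e6  # ~2.8 trillion miles

# Consistency check: recover the quoted rate from the assumed VMT.
print(f"Fatalities per 100 million VMT: "
      f"{fatalities_2002 / (assumed_annual_vmt / 100e6):.2f}")    # 1.51
```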

The nature and frequency of accidents differ widely among the various transportation modes. For example, the year 1999 witnessed 51 major accidents in commercial aviation and 2,768 train-related accidents that were responsible for 932 casualties. At the same time, the safety of the various transportation modes varies with geographical location. For example, in terms of casualties, passenger shipping may be less safe than driving in parts of Southeast Asia, and flying may be more dangerous in certain parts of Africa. Indeed, the phrase "richer is safer" seems to hold true except when considering road traffic accidents in heavily populated, affluent parts of the world. All the same, a transportation accident is one of the most common risks to which any member of society is exposed.

2. Applied Psychology and Accidents

Transportation accidents are interesting from an applied psychology perspective because they are, most basically, about human behavior in applied settings. In fact, accidents are typically about human cognition as much as they are about technical problems and issues. The applied psychological questions surrounding accidents can vary widely, from issues of sensation and perception in an individual driver to shortcuts made by organizational members under pressures of scarcity and competition. Applied psychology as a field of scientific inquiry can use its insights for, and derive much empirical inspiration from, transportation accidents. Indeed, applied psychology in the broadest, most inclusive sense is relevant to transportation accidents.

In studying whether a car driver could have seen a pedestrian, applied psychology may be called on to assess the effect of atmospheric modulation (e.g., rain, mist, dust) on the perceptibility of a stimulus such as the pedestrian. Models exist to calculate so-called atmospheric modulation transfer that can help applied psychologists reconstruct the visibility prevailing during a particular occurrence. Additional questions may arise when substance abuse is suspected on the part of the driver; in such a case, applied psychology relies more on models of psychopathology. Much research leverage exists in gauging the (sub)cultural acceptability of consuming alcohol before engaging in transportation activities. Other questions with relevance to applied psychology arise when notable mismatches occur in people's risk perceptions. For example, people might not want to use the railroads because of one highly publicized recent fatal accident. If they prefer to drive their cars instead, they actually expose themselves to the risk of an accident (perhaps even a fatal one) several orders of magnitude larger than the risk they would run while riding the train. Such individual risk assessments are deeply confounded, mixing personal histories, notions of control and destiny, and various kinds of biases and rationalizations (e.g., the availability heuristic) into a rich trove for applied psychology.
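
The atmospheric modulation transfer models referred to above are not reproduced here. As a much simpler stand-in, the sketch below uses the classical Koschmieder relation, in which apparent contrast decays exponentially with distance, to illustrate how a reported meteorological visibility might be turned into a rough estimate of whether a pedestrian would have exceeded a perceptual contrast threshold. The inherent contrast, distances, and threshold are illustrative assumptions, not values from the source.

```python
import math

# Simplified visibility reconstruction using the Koschmieder relation:
#   apparent_contrast(d) = inherent_contrast * exp(-sigma * d)
# where the extinction coefficient sigma is tied to meteorological visibility V
# through the conventional 2% contrast threshold: sigma = 3.912 / V.
# This is a crude stand-in for the atmospheric modulation transfer models
# mentioned in the text; all numbers below are illustrative assumptions.

def apparent_contrast(inherent_contrast: float, distance_m: float,
                      visibility_m: float) -> float:
    """Contrast of a target against its background after atmospheric attenuation."""
    sigma = 3.912 / visibility_m  # extinction coefficient, 1/m
    return inherent_contrast * math.exp(-sigma * distance_m)

def is_detectable(inherent_contrast: float, distance_m: float,
                  visibility_m: float, threshold: float = 0.05) -> bool:
    """Rough detectability test: apparent contrast must exceed a perceptual threshold."""
    return apparent_contrast(inherent_contrast, distance_m, visibility_m) >= threshold

# Example: a dark-clothed pedestrian (assumed inherent contrast 0.4) viewed at 80 m
# in mist that reduces meteorological visibility to about 200 m.
print(round(apparent_contrast(0.4, 80.0, 200.0), 3))  # ~0.084
print(is_detectable(0.4, 80.0, 200.0))                # True at a 5% threshold
```

In practice, such an estimate would be only one input among many; the models cited in the text also account for the spatial-frequency content of the scene, which a single contrast number ignores.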

Perhaps even more complex are questions surrounding the trade-offs made by professionals who work in transportation systems, leading to issues such as "safety cultures" in these systems. Does the commander of the aircraft continue with the approach in bad weather or not? Does the master of the ship sail in the storm or not? Such trade-offs are more than the risk assessments of individual decision makers; they must also be understood as expressions of the preferences and priorities of entire sociotechnical systems that operate in environments of scarcity (e.g., only so many passengers, only so much money to be had) and competition (e.g., somebody else is always ready to take over one's routes). In efforts to investigate these kinds of trade-offs and other applied problems, psychology has seen a methodological shift accelerate over the past decade or so, with an increasing emphasis on fieldwork and the study of applied settings. It is believed that to understand decision making in actual (transportation) work, where real decisions have real outcomes for real people, psychological researchers must get out of the laboratory and investigate applied settings directly. Issues of confounding factors are dealt with through intensive analysis and interpretation of research results, leading to findings (e.g., on naturalistic decision making) that are both internally valid and exportable to other applied settings.

There is also a role for applied psychology in understanding issues of culpability and control. To what extent, and why, do we judge participants in transportation (e.g., drivers, pilots) to be culpable for the accidents that they "cause"? Such questions are deeply complex and touch on much of what people hope and believe about the world in which they live. They are interconnected with yet another set of biases, including the "hindsight bias," which describes how knowledge of an outcome profoundly alters one's perception of the behavior and intentions that led up to that outcome. The hindsight bias also allows one to convert a complex, tangled history leading up to an accident into a simple series of binary decisions, where participants had clear choices to do the right thing or the wrong thing. These questions also inevitably coincide with assumptions about causation, whether justified or not. Such assumptions tend to be quite problematic and affect applied psychological reasoning about accidents and about human intention and behavior.

3. Why Do Accidents Happen?

One of the more compelling questions for those involved in transportation is why accidents happen. Indeed, a pressing issue after an accident is often to resolve the question, "What was the cause?" There is, of course, no single or unequivocal answer, and not just because the various modes of transportation may differ widely in their respective etiologies of failure. Models of accident causation develop continually, reflecting not only new insights or access to accident data but also the general scientific spirit of the times. In transportation, the so-called "chain of events" model is popular. In this model, one failure somewhere in the system can be seen to lead to (or trigger) the next, and so on, until this cascade of individually insignificant faults pushes the entire system over the edge of breakdown. For example, a piece of debris on the runway, left there by a preceding aircraft, punctures the tire of a subsequent aircraft. Tire fragments slam into the wing, puncturing the fuel tank and triggering a fire that, not much later, brings down the entire aircraft. The chain of events model is also credited with distributing or refocusing the search for accident causes away from frontline operators (i.e., away from "operator error"), identifying instead higher-level supervisory and organizational shortcomings that may have contributed.

However, recent insight into the sociotechnical nature of large transportation accidents, such as the disasters involving the Space Shuttles Challenger (in 1986) and Columbia (in 2003), has exposed the limits of the chain of events model. Most critically, the model presupposes (or even requires) failures in order to explain a failure. This contradicts characterizations of the organizational and operational practice preceding accidents as normal, everyday routine. Accidents can still happen even if everybody follows the rules and there are no "failures" or "shortcomings" as seen from the perspective of those running and regulating the transportation system. As Perrow suggested in 1984, accidents in these systems may be quite "normal" given their structural properties of interactive complexity and tight coupling. Nothing extraordinary (i.e., no obvious breakdowns or failures) as seen from inside the system is necessary to produce an accident. For example, aviation is a tightly coupled system, as are railroads. This means that there is little slack with which to recover if things do start to go wrong. The workings of a railroad system, however, are generally more linear and transparent than those of aviation, with the former allowing for better insight into how to manage the system.

These insights have put pressure on the old label "human error" as an explanation for accidents. Indeed, human error today can no longer legitimately be seen as the cause of accidents; rather, it is seen as an effect, a symptom, or a sign of trouble deeper inside the system. Human error is no longer an explanation; instead, it demands one. Human error, to the extent that it exists as a separable category of human performance (which many researchers doubt or deny), is systematically connected to features of people's tools and tasks. This is consistent with the ecological commitment in much of applied psychology, where an analysis of agent–environment mutuality, or person–context couplings, is critical to producing useful insights into the success and failure of transportation (and other) systems. The latest accident models deemphasize "cause" altogether, noting that it is a deeply Newtonian concept that might not carry over well into an understanding of why complex transportation systems fail, with the Columbia accident being a case in point. These models instead see transportation accidents as control problems: as the gradual erosion and eventual loss of control over a safety-critical process in which safety constraints on design or operation are violated. Such models can be better reconciled with increasing evidence that large transportation accidents are nearly invariably preceded by some kind of slow but gradual drift (e.g., away from procedures, away from design specifications) that is hard to notice or to characterize as deviant when seen from the inside, that is, from the perspective of participants themselves. The consistent, if slight, speeding that most male drivers exhibit (e.g., nearly always 5 miles per hour over the speed limit) is another example of this: the deviance is normalized, the nonroutine becomes routine, and noncompliance becomes consistent. Production pressures (e.g., pressure to be on time, pressure to maximize capacity use) form the major engine behind such drift toward failure in nearly all transportation systems, because virtually no transportation system is immune to the pressures of competition and resource scarcity. Recent research debates are not necessarily about the existence or reality of such pressures; rather, they concern to what extent these pressures consciously affect people's trade-offs and to what extent people deliberately gamble (and lose if they have an accident). Much research points to the prerational, insidious influence of production pressures on people's trade-offs rather than to conscious immoral or risky calculation, although male drivers under 25 years of age might be an exception.

Acknowledging that such sociotechnical complexity underlies most transportation accidents means that safety and risk are really social constructs rather than objective engineering measurements. Risk is constructed at the intersection of social forces and technical knowledge (or the lack thereof). Slogans such as "safety comes first" are mere posturing. They are disconnected from a reality in which risk is negotiated as a subjective social activity and in which it differs across cultures, across organizations within a single culture, and even across subcultures within one organization.

4. Regulation of Transportation Systems and Incident Reporting

Most, if not all, transportation systems are regulated to some extent. Regulation means that rules are created to govern practice and that the rules are enforced to generate compliance. This often requires large organizations, nearly always public or government agencies, tasked with issuing rules, overseeing practice, and certifying and checking systems, operators, and organizations. The extent of regulation of the various transportation modes varies greatly from country to country. In some countries, regulators are criticized for having too close a relationship with the transportation industries they are supposed to regulate and for having a dual and supposedly incompatible mandate (i.e., promoting the use and growth of the transportation system as well as overseeing and monitoring its compliance).

The success of safety regulation depends on the safety level that the transportation system has already achieved. In relatively unsafe transportation modes (e.g., private flying), regulation can pay great and rather immediate dividends. It can help to standardize practice; it can issue, broadcast, and enforce rules and remind operators of them; and it can help to build a corpus of cases on routes to accidents that can be shared in the community. There is often leverage in changing designs (e.g., operator interfaces) to make them more error resistant and error tolerant. In safer systems (e.g., charter flights), such error-resistant designs have typically evolved further. Accidents there are often preceded by so-called "dress rehearsals": sequences of events similar to real accidents but without fatal outcomes. Learning from those dress rehearsals is encouraged, particularly through incident-reporting systems. However, in ultra-safe transportation systems (e.g., European railroads), overregulation becomes a problem. More rules and more monitoring are no longer accompanied by safety gains; they serve only to increase system complexity and potentially decrease transparency as well as rule compliance. Incident-reporting systems in ultra-safe transportation systems may be of limited value. The typical accident there emerges from routine, everyday, banal factors that combine and align in ways that are hard to foresee. Incident reports would neither notice the relevance of those factors nor be able to subsequently project their interplay. This, in fact, is a problem shared by all incident-reporting systems. Reporting is not the same as analysis, which in turn is not the same as learning from potential failure. Just having an incident-reporting system in place guarantees little in terms of progress on safety.

5. Interventions, New Technology, and Future Developments

Prevention and improvement strategies vary considerably from one transportation mode to another because many of the factors that affect accident likelihood are mode specific. In general, prevention strategies seek either to minimize the likelihood of a particular kind of harmful event (e.g., a crash) or to minimize its impact (in terms of property damage, injury, or environmental pollution). New technology often plays a dominant role here. Automatic seat belt systems, side-impact protection systems, collision detection and avoidance systems, anti-lock braking systems, double-hull vessels, and automated cocoons that keep an aircraft within its prescribed envelope all intervene, prevent, and protect in one sense or another. Even though enormous progress has been made through such interventions, technology is sometimes a mixed blessing. Airbags may protect mostly drivers who do not wear seat belts (forcing the large majority of other people, who do wear seat belts, to pay a premium on their cars), and automation in virtually all transportation modes has been associated with the emergence of new human–machine coordination problems (e.g., mode errors, "automation surprises," people getting lost in display page architectures). Indeed, automation has been associated with the emergence of new types of accidents in virtually all transportation modes, where it has introduced new capabilities and new complexities. One typical signature of accidents in automated transportation systems is the "going sour" scenario, in which a small trigger event (i.e., an unusual occurrence or a small failure in a system somewhere), itself innocuous, leads to a series of misassessments and miscommunications between humans and computers. The system is managed into greater hazard through coordination breakdowns among the various players (both humans and machines). The end result is often an automation surprise, in which humans discover that what they thought they had instructed the automation to do was not quite what the automation was doing. More research, much of it in applied psychology, will be necessary to use new technology and other interventions to their full potential in helping to reduce transportation accidents.

References:

  1. Amalberti, R. (2002). The paradoxes of almost totally safe transportation systems. Safety Science, 37, 109–126.
  2. Billings, C. E. (1996). Aviation automation: The search for a human-centered approach. Mahwah, NJ: Lawrence Erlbaum.
  3. Dekker, S. W. A. (2002). The field guide to human error investigations. Bedford, UK: Cranfield University Press.
  4. Perrow, C. (1984). Normal accidents: Living with high-risk technologies. New York: Basic Books.
  5. Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. Chicago: University of Chicago Press.
