Human Factors Research Paper

Human factors is an applied discipline of psychology that is concerned with the interactions between humans and machines. Alphonse Chapanis (1985), one of the founders of the discipline, said, “Human Factors discovers and applies information about human abilities, limitations and characteristics to the design of tools, machines, systems, tasks, jobs and environments for safe, comfortable and effective human use” (p. 2). Put more simply, human factors is concerned with the consideration of people in the design of products, services, and systems. Unlike the traditional disciplines of psychology, which typically focus on specific human behaviors and capabilities alone, human factors is primarily concerned with how these behaviors and capabilities limit human performance in the real world, whether in task performance or in the design and use of machine interfaces. For example, a perceptual psychologist may be interested in determining contrast sensitivity functions for the human visual system after it is exposed to bright flashes of varying intensity. From these data, the perceptual psychologist might theorize about the underlying structure and function of the components of the visual system that might be responsible for the observed results. In contrast, the human factors psychologist might use the same data to determine what a driver would be able to see after a blinding flash from an oncoming car’s headlights, in order to design roadway signs that could be seen under those conditions. The human factors psychologist does not eschew basic research, but rather maintains a focus on the practical application of such data to the solution of problems in which the performance of the human is an integral component.

It is important to note that, contrary to popular belief, there are several things that human factors is not. First, it is not simply the application of common sense. Many of the examples used to illustrate common human factors problems (e.g., which way to turn a faucet handle to start water flow) give the appearance of common sense, and violations are seen as oversights on the part of the designer. Most problems in human factors are not common sense at all—the force required to turn an emergency exit handle and the minimum frequency difference between two auditory signals required to ensure detectability are but two examples that illustrate that human factors, like general psychology, is a data-driven endeavor. In fact, even the faucet may not be as common sense as it seems—if turning the handle in a clockwise fashion is the common sense way to close a valve, then why does a seemingly more critical system, such as a natural gas fitting, work the opposite way? The second misconception about human factors is that it is essentially the application of guidelines and checklists. To be certain, as the body of human factors knowledge has grown, the codified data have become represented in guidelines that aid the designer in making decisions. However, much of the work performed by human factors professionals involves the collection of data that are not yet codified and must be discovered as part of a rigorous research effort. Finally, developers will often contend that they are humans, so if they can understand and use a system they have built, then it must adhere to human factors design principles. In this case, the developer has accounted for only a single user in a highly varied population, so this assessment is incorrect. Even if the developer will be the only user of the system, it is unlikely that the system has been designed to ensure accurate operation under conditions of operator stress and fatigue, or under emergency operation.

Human factors is typically concerned with the development or analysis of systems. From the viewpoint of human factors, modern systems comprise three components: hardware, software, and people. In most modern systems, the human is the most unreliable and unpredictable component of the system. Human factors looks at problems that occur where the human part of the system interacts with the hardware and software portions of the system. It is these boundaries, or interfaces, that must be perfected to ensure that the human and the machine (be it hardware or software) can effectively communicate with each other. The system needs to communicate with the human in forms that are easily detectable and easily interpreted. In turn, the human must be able to quickly and accurately control the system by providing the correct input. This input must be recognizable by the machine, but more important, it must be of a form that makes it easy and intuitive for the human to provide.

Human factors is often divided into two distinct areas. The first of these is ergonomics. Ergonomics is primarily concerned with the physical size, performance, and limitations of the human body. Ergonomics focuses on how the physical attributes of the human body impact the ability of a person to perform a task. This can be as simple as determining the appropriate size and spacing of buttons to use on a television remote control so that only one button is pushed at a time, or as complex as describing the kinematic and dynamic properties of the human body in order to design adequate automobile restraint systems. Ergonomics is often associated with the transportation industry (can the driver or pilot see and reach all of the controls?) and office work space design, particularly in the design of chairs, desks, and computer input devices. This definition of ergonomics is strictly a North American view, however. Elsewhere, the terms human factors and ergonomics are generally thought of as being synonymous.

The other area of human factors is engineering psychology. Also referred to as cognitive ergonomics, engineering psychology is focused on the behaviors of individuals and how those behaviors impact system performance. This broad definition means that issues such as cognition, perception, and training all fall into this area. Although most of the early work in engineering psychology was focused on the interaction of the human with physical controls and displays, much of the current work is concerned with various aspects of human-computer interaction (HCI).

These two areas are reflected in the demographic makeup of the professionals practicing human factors and in the institutions that train them. According to the National Research Council, 52 percent of the graduate programs in human factors were primarily affiliated with engineering departments, 42 percent with psychology departments, and the remaining 6 percent with other departments (e.g., aviation, health, design, or kinesiology; Van Cott & Huey, 1992). A more recent review of the Directory of Human Factors Graduate Programs in the United States (Human Factors and Ergonomics Society, n.d.) still shows a roughly even split between engineering and psychology affiliates (43 percent and 40 percent, respectively) but an increase in the number of programs with other affiliations. This increase in specialized programs will likely continue as the discipline matures and greater specialization is required.

A Brief History Of Human Factors

As can be seen by the composition of human factors training programs, human factors is a discipline born of two very different disciplines, engineering and psychology, coming together in order to understand the interface between man and machine. This convergence is a relatively recent occurrence, however. Hugo Munsterberg, widely considered the founder of the modern industrial/organizational psychology movement, was one of the first to systematically study what is now recognized as the beginnings of modern human factors. In 1913, Munsterberg wrote the classic text titled “Psychology and Industrial Efficiency,” in which he described the three areas of human factors that remain important today: how to select the best person for the job, how to design the job to fit the human in a way that promotes efficiency, and how to understand human behavior in the marketplace. Frank and Lillian Gilbreth and Frederick Taylor were contemporaries of Munsterberg, and they contributed greatly to the idea that the human was an important part of any given system. They sought to increase the efficiency of the human worker through the careful analysis and restructuring of the job, and promoted the then-novel concept of fitting the job to the human rather than the other way around. The Gilbreths’ study of bricklaying techniques and Taylor’s study of the optimal shovel design in a steel factory demonstrated that careful attention to the human component of the system could yield tremendous gains in the efficiency of the job being done and the comfort of the person performing the work.

Although this work was groundbreaking, human factors as a discipline didn’t gain significant prominence until World War II, when technology was advancing quickly and large numbers of people had to be trained for jobs in which they had no experience. Alphonse Chapanis is widely considered to be the father of modern human factors for his work in the aviation field during the war. Chapanis had noted that pilots of certain aircraft (B-17s and B-25s) would raise the landing gear just as they were landing. These accidents contributed to a growing class of aircraft mishaps that were frequently attributed to “pilot error.” Chapanis noted that pilots of other aircraft, like the C-47, did not seem to experience this problem. Upon further analysis, he discovered that in the B-17 and B-25, the controls for the landing gear and the flaps (which would also be used during landing) were located next to each other and were identical in appearance. In the C-47, however, the controls were in very different places. This discovery led him to assert (correctly) that many cases of pilot error were, in fact, design errors. Through the careful application of cognitive and perceptual principles, he was able to shape-code the controls and nearly eliminate these kinds of accidents (Roscoe, 1997).

After the war, human factors continued to grow, and began to slowly find its way into the private sector, particularly in the fields of aviation and communication. Commercial aircraft were just coming of age; most human factors professionals had come from the military aircraft industry and had seen what the application of psychology could do in the design and assessment of aircraft. The early meetings of these psychologists working in the aircraft industry represented the beginnings of the Human Factors and Ergonomics Society, which was officially founded in 1957. The communications industry contributed greatly to the growth of the profession during this time as well. Bell Labs had engineering psychologists looking at problems ranging from voice quality on the network to the design of the then-new touch-tone phones (Meister, 1999). The importance of human factors in these domains was becoming more widely acknowledged, and human factors students were beginning to be trained at a handful of U.S. universities.

A number of large-scale disasters occurred in the late 1970s and 1980s that highlighted the fact that human factors was still being practiced in relatively few areas. In 1979, the accident at the Three Mile Island nuclear plant in Pennsylvania demonstrated that large, complex systems could overwhelm the cognitive and perceptual capabilities of the human operator, with near-disastrous consequences (United States Nuclear Regulatory Commission, 2004). Seven years later, similar design and system deficiencies led to a catastrophic explosion at the Chernobyl nuclear power plant in Ukraine. As with Three Mile Island, the information presented to the operators of the plant either was insufficient or was presented in ways that overwhelmed the operators and compromised their ability to respond appropriately (United States Nuclear Regulatory Commission, 2000). The 1984 Union Carbide disaster in Bhopal, India, in which over 3,800 people lost their lives in a massive release of toxic gas (Broughton, 2005), demonstrated again that humans were one of the weakest links in complex systems control. Numerous accidents involving aircraft also occurred during this period, with great loss of life. This is best exemplified by Korean Air Lines Flight 007, which was shot down by the Soviet Air Force with a loss of 269 lives after the crew misprogrammed the navigation system and subsequently failed to notice the error (Federal Bureau of Investigation, 1983).

It was during this time that the true value of engineering psychology was becoming evident, not only as an after-the-fact analysis tool but also as a proactive tool that might help identify and prevent these kinds of disasters from occurring in the first place. It was also during this time that another phenomenon was taking place that would push human factors even further into the forefront: the introduction of the personal computer. In 1981, IBM began selling its first personal computer. Although other personal computers were available before that, notably the TRS-80, the Apple II, and the Commodore PET, the release of the IBM PC heralded a huge leap forward in technology for the average consumer. Coupled with personal productivity software that could perform word-processing and spreadsheet functions, personal computers began to change how work (and play) was done. With the advent of the PC came a new subdiscipline of human factors, known as human-computer interaction (HCI). HCI practitioners are focused on understanding how to best design both software and hardware to maximize overall system performance. Although the techniques employed may be slightly different from those used prior to the advent of the computer, these professionals are performing work that is similar in its goal to that of the first engineering psychologists—maximizing human performance through the optimization of the system. With widespread availability of the personal computer, the networking technology that was developed in the late 1960s with ARPANET became a valuable tool, and Internet/World Wide Web applications multiplied. Remote computing, e-commerce, social networking, and information retrieval have all become topics that HCI psychologists are investigating.

Future human factors endeavors will undoubtedly focus on computer interfaces that are ever more powerful and sophisticated, and that will likely employ computer-generated virtual reality or augmented cognition to help the user explore virtual worlds. Not restricted to gaming, these virtual interfaces will extend into military applications, like the Land Warrior System (a networked virtual system for soldiers), and perhaps robot interfaces that will allow remote exploration of oceans and space. In all cases, the goal of the engineering psychologist will be to ensure that the human can function safely, effectively, and efficiently with these new complex systems.

Methods Employed By Human Factors

Although human factors professionals often employ psychophysical methods in order to gain information about fundamental human capabilities, human factors engineers and psychologists also employ a number of methodologies in the design of systems that are distinct from these classic psychophysical methods. The interested reader can find out more about standard psychophysical methods in Fechner (1860/1966), Green and Swets (1966), or in Chapter 20 in this book.

Human factors methods are focused on determining the needs of the human in relation to the mission requirements or goal of the system, and then determining how to design the system so that limitations in human physical, perceptual, and cognitive capabilities are not exceeded in the successful accomplishment of that goal. Although the accomplishment of a mission or goal may seem to connote large, complex military or industrial systems, these terms are commonly used to describe any situation in which a user needs to perform a task to reach a desired outcome. Although this may entail a complex system, the goal might be as simple as using a phone to contact someone. In this case, for example, it would be important to make sure that the coded method of identifying the recipient of the call (the phone number) is designed in such a way as not to exceed the memory capabilities of most users.

Human factors methods are divided into four classes, depending on what information needs to be gathered and where in the design process the methods are being employed. There are specific methods that are used before the design has been started, methods that are used during the design process, assessment methods used on a completed design, and methods that are employed after a design has suffered a critical incident. The methods described below do not constitute an exhaustive list. Rather, these methods represent some of the most common and powerful tools used by practitioners and academicians as they explore, build, design, and test human-machine systems.

Methods Used Before The System Has Been Designed

The methods employed in the predesign process are used to gather information about the problem to be solved and the characteristics of the users of the proposed system. Often, one of the first things that researchers need to determine is which problem they should address first. In systems with many components or operational paths, there needs to be a way to winnow down the problem space and identify the problem (or class of problems) that has the highest impact or is causing the most difficulty for users. The Pareto principle states that 80 percent of the trouble can be accounted for by 20 percent of the problems. By performing a Pareto analysis, researchers can identify and prioritize this 20 percent of the problems for inclusion in the human factors design effort. Once the problems that need to be addressed are identified, it is prudent to perform an in-depth competitive analysis to determine what similar products or systems are doing with respect to these areas. It is an unwise use of resources to reinvent the wheel, and if there are commonalities across competing solutions, then there may be a common solution. Always keep an open mind, however, in case the common solution is actually part of the common problem. In this case, a unique solution may be the appropriate way to solve the problem.
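
To make the prioritization concrete, here is a minimal sketch of a Pareto analysis in Python. The problem categories and report counts are hypothetical illustration data; a real analysis would draw on logged incident or complaint frequencies.

```python
# Minimal Pareto analysis sketch: rank problem categories by report
# frequency and find the smallest set that accounts for ~80 percent
# of all reported trouble. The category names and counts are
# hypothetical illustration data.

problem_reports = {
    "confusing menu labels": 412,
    "missed error feedback": 305,
    "slow screen transitions": 96,
    "cramped button spacing": 71,
    "unreadable font size": 43,
    "inconsistent icons": 22,
}

total = sum(problem_reports.values())
ranked = sorted(problem_reports.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
priority_set = []
for name, count in ranked:
    cumulative += count / total
    priority_set.append(name)
    if cumulative >= 0.80:          # stop once ~80% of trouble is covered
        break

print(f"{len(priority_set)} of {len(ranked)} categories "
      f"account for {cumulative:.0%} of reports:")
for name in priority_set:
    print(" -", name)
```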

Once the problem space has been identified and potential solutions analyzed, it is important to determine what activities users perform when using the current system. An activity analysis can help evaluate the actions users are taking in the course of completing the tasks and the frequency with which these actions occur. Activity analysis does not account for the time or difficulty of the actions, only the frequency with which they occur. It can provide critical information about tasks that may seem unimportant to primary task completion, yet demand significant action from the user.

Researchers sometimes conduct ethnographic studies to determine how the user interacts with the system in the real world, without the constraints of controlled observation. Originally conceived for use in anthropology, this method entails the observation of the user in the natural environment as he or she uses the system. The method is extremely valuable because it captures specific behaviors that are hard to identify with other methods. These behaviors include unusual uses of the system (off-uses) and interactions with seemingly unrelated systems. For example, the common use for a cell phone is, of course, to make or receive calls. Off-uses that have been captured with ethnographic methods include using the phone as a bottle opener and as a functional (but dim) light source. Ethnographic methods are very adept at capturing these kinds of behavior, but can be time consuming and costly to implement. Because of these limitations, ethnography has been adapted for human factors use by making the method faster and easier to apply. The resulting methods, collectively called rapid ethnography, involve team observations, directed interventions with the participants, and computer-based data reduction techniques that differ from classic ethnography (Millen, 2000).

Methods Used During The System Design Process

Once the design process has commenced, a number of methods are employed that help the human factors professional choose between competing alternatives and select optimal design solutions. One of the first things that the researcher must decide during the design process is what part of the system will perform various functions. During this function allocation, the designer will determine what tasks are to be performed by the hardware, software, or the human. This is necessary in order to specify the kinds of interfaces that will necessarily follow—will they be control interfaces, monitoring interfaces, or simply information displays? In function allocation, it is important to understand the capabilities of each of the system components and use that information to help make the allocation decisions. For example, a human can’t easily lift 500 pounds (better to allocate to the hardware), a computer can’t easily determine how you feel (best to allocate to the human), and neither the hardware nor the human can easily compute the trajectory of a missile (best to allocate to the software). Fitts (1951) approached the allocation problem by creating lists of items at which men and machines excelled. These MABA-MABA (“men-are-better-at”/“machines-are-better-at”) lists provided guidance on the allocation of functions. However, significant strides in computing technology have been made since the 1950s (notice that Fitts did not even include a “computers-are-better-at” category) and the line has blurred considerably. Artificial intelligence, adaptive programs, and smart bots have changed the way function allocation must occur. Frequently, designers use dynamic allocation of functions, meaning that the computer has control under some conditions and the human has control under others. The autopilot on an aircraft is a good example of this kind of system; the plane can switch between being controlled by the software or by the human, and this transfer can occur while the aircraft is being flown. Although this sounds like an ideal solution, care must be taken to ensure that the human is ready, able, and cognizant of the switch from software to human control.

Once the function allocation has taken place, the human factors engineer can begin the real work of system design. This involves determining what system functions must take place, and the order and time duration of each of those functions. This is accomplished through the application of flow and timeline analysis. Flow analysis details the paths that must be traversed to use the system. The flows that are tracked can be associated with people (physical movement in space, local movement of hands or eyes), information, or materials. Attaching times to these flows adds additional information that designers can use to gauge difficulty or efficiency of the task flow. Flow analysis is an excellent method for uncovering inefficiencies in a system. Once the flow analysis is complete, a link analysis is the next logical method to be employed. A link analysis quantifies the relation between each of the various components identified in the flow analysis. For example, designers commonly use a link analysis to determine the optimal layout for visual displays. Using information about the importance of each element in the display, the amount of time that is spent looking at each element, and the relation of visual gaze between each element (from the link analysis) allows designers to calculate an optimal arrangement of display elements using standard linear programming techniques.
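
As a rough illustration of how link values can drive layout decisions, the following sketch scores every arrangement of four hypothetical display elements on a 2 x 2 panel by summing link strength times the physical distance between linked elements, and keeps the cheapest arrangement. A brute-force search stands in here for the linear programming techniques mentioned above; the element names, link weights, and slot coordinates are all assumptions for illustration.

```python
# Link-analysis sketch: score candidate display layouts by the
# gaze-travel cost implied by transition frequencies between
# elements. Elements, link weights, and slot coordinates are
# hypothetical; real link values come from observed eye movements.
from itertools import permutations
from math import dist

elements = ["airspeed", "attitude", "altimeter", "heading"]

# Symmetric link strengths: how often gaze moves between two elements.
links = {
    ("airspeed", "attitude"): 30,
    ("attitude", "altimeter"): 25,
    ("attitude", "heading"): 20,
    ("airspeed", "altimeter"): 5,
    ("airspeed", "heading"): 4,
    ("altimeter", "heading"): 6,
}

slots = [(0, 0), (1, 0), (0, 1), (1, 1)]   # 2x2 panel positions

def layout_cost(assignment):
    """Sum of link strength times physical distance between elements."""
    pos = dict(zip(assignment, slots))
    return sum(w * dist(pos[a], pos[b]) for (a, b), w in links.items())

best = min(permutations(elements), key=layout_cost)
print("Lowest-cost arrangement:", best, f"(cost {layout_cost(best):.2f})")
```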

One of the most widely used and important design guidance methods is the task analysis. A good task analysis determines what people will actually do when they perform a specific task on a given system. It helps uncover the kinds of errors users may make when they use the system and how those errors might affect performance. It also helps find conditions where the human becomes overloaded or is asked to perform tasks that exceed typical human capabilities. Task analysis can also provide insight into the time it takes to perform certain tasks on a system, and identify time constraints put on the user by the system. During the first attempt at performing a task analysis, researchers may find that the description of the flow and actions is simply not right, or that the task can’t actually be performed in the way specified. It may also be the case that the actual task doesn’t take place the way the data suggested it would. In these cases it is important to iterate through the task analysis until the flows it describes are both accurate and efficient.

Another, more analytic set of methods for helping to design systems (particularly computing systems) is the GOMS family of methods. GOMS stands for Goals, Operators, Methods, and Selection rules and is a way to quantify the time a user will take to perform a specified task. GOMS is based on the rationality principle: If the limits of the system and the knowledge the user has are known, then designers can determine a user’s behavior by understanding the goal of the user, the tasks the user selects to accomplish that goal, and the elements (or operators) that must be performed in the completion of the task. GOMS uses task element descriptions, such as “point to an object” and “press a key on a keyboard,” to describe what a user must do in order to complete the task. There is a specific element description used for simple mental processes, like “finding an icon on the desktop” or “verifying that an action has been taken.” These operators have empirically derived times associated with them that allow them to be combined to give total task time. Several distinct implementations of GOMS are suited for different levels of analysis. KLM-GOMS (Keystroke-Level Model GOMS) is the simplest and can be implemented without the aid of a computer. NGOMSL (Natural GOMS Language), CMN-GOMS (Card, Moran, and Newell GOMS), and CPM-GOMS (Critical Path Method GOMS) are more sophisticated GOMS models that require a computer to execute, but are also significantly more powerful in the kinds of actions they can describe. Designers can use GOMS models to compare different interaction modes to determine which provides the highest efficiency and effectiveness. See John and Kieras (1996) for a more complete description of the GOMS family of methods and how they are implemented.
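
The following sketch shows the flavor of a keystroke-level (KLM-GOMS) calculation: task time is predicted by summing empirically derived operator times. The operator constants are approximate values commonly cited in the keystroke-level model literature and vary with the user population; the two task sequences compared are hypothetical.

```python
# KLM-GOMS sketch: estimate total task time by summing operator
# times. The operator constants below are approximate, commonly
# cited keystroke-level values; the two task sequences compared
# are hypothetical illustrations.

OPERATOR_TIME = {          # seconds
    "K": 0.20,   # keystroke or button press (skilled typist)
    "P": 1.10,   # point with a mouse to a target
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def task_time(sequence):
    """Total predicted time for a string of operators, e.g. 'MHPK'."""
    return sum(OPERATOR_TIME[op] for op in sequence)

# Hypothetical comparison: delete a file via menu vs. shortcut key.
menu_method = "MHPKMPK"        # think, grab mouse, point, click, ...
shortcut_method = "MKK"        # think, then a two-key chord

print(f"Menu method:     {task_time(menu_method):.2f} s")
print(f"Shortcut method: {task_time(shortcut_method):.2f} s")
```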

One last method that is often used to help determine how users categorize items is the card sort. Although the method has found its greatest applicability in the design of navigation schemes for Web sites, it can be extended to any case where the designer must know how users sort, search, or classify items in a system. The method has users sort all the elements of interest (usually on 3 x 5 cards) into groups. In an open sort, the user determines the number of groups and what the names of those groups should be. Once the designer has collected sufficient data from a series of open sorts, a closed sort can be conducted where the categories are predefined and the user’s task is to simply put each of the elements into one of the groups. Statistical analysis can then be conducted to determine the strength or goodness of each of the categories and its attendant elements. An extremely low-tech method in its implementation, it nevertheless provides invaluable information in certain design tasks.
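
A minimal sketch of the first analysis step for open-sort data appears below: it tallies, for every pair of items, how many participants placed the pair in the same group. The items and groupings are hypothetical; a real study would feed a matrix like this into a clustering procedure.

```python
# Open card sort sketch: build pairwise co-occurrence counts giving
# how many participants placed each pair of items in the same group.
# High values suggest natural categories. The participants'
# groupings below are hypothetical.
from itertools import combinations
from collections import Counter

sorts = [   # one list of groups per participant
    [{"checking", "savings"}, {"auto loan", "mortgage"}],
    [{"checking", "savings", "auto loan"}, {"mortgage"}],
    [{"checking", "savings"}, {"auto loan", "mortgage"}],
]

pair_counts = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

n = len(sorts)
for (a, b), c in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {c}/{n} participants")
```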

Methods Used On A Completed System Design

Assessment of completed systems is an extremely common task for engineering psychologists and human factors engineers. Assessments on completed designs help determine whether the system can be used in accordance with the metrics specified in ISO 9241-11 (International Organization for Standardization, 1998). Can the system be used effectively (low rate of error commission)? Can it be used efficiently (users complete tasks in a reasonable amount of time)? Can it be used with high satisfaction on the part of the user? Using these three metrics, an engineering psychologist can determine whether the system is ready for widespread deployment and use by the intended population. The methods also provide valuable input to the design team about the nature, severity, and number of human factors issues that must be addressed in subsequent versions of the system. These assessment methods can be classified into three distinct categories: methods of inquiry, methods of inspection, and methods of observation.
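
The sketch below summarizes a hypothetical usability test along the three ISO 9241-11 metrics: effectiveness as task completion rate, efficiency as mean time on task, and satisfaction as a mean questionnaire rating. The per-participant records and the 1-7 rating scale are assumptions for illustration.

```python
# ISO 9241-11 sketch: summarize a usability test along the three
# metrics named in the standard. The per-participant records are
# hypothetical; a real study would also report variability and
# compare results against predefined targets.

results = [   # (task completed?, seconds on task, satisfaction 1-7)
    (True, 94, 6), (True, 121, 5), (False, 300, 2),
    (True, 88, 6), (True, 143, 4),
]

effectiveness = sum(done for done, _, _ in results) / len(results)
efficiency = sum(t for _, t, _ in results) / len(results)
satisfaction = sum(s for _, _, s in results) / len(results)

print(f"Effectiveness (completion rate): {effectiveness:.0%}")
print(f"Efficiency (mean time on task):  {efficiency:.0f} s")
print(f"Satisfaction (mean rating, 1-7): {satisfaction:.1f}")
```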

Methods of inquiry are those in which the user is asked about his or her experience with the system. Contextual inquiry is one of the more common forms in use today. It entails letting participants use the system in their normal environment and collecting verbal data about the users’ experience with regard to the system of interest. The users give a running dialog of their use experience during this time, while the human factors engineer makes inquiries about specific behaviors of interest. Like ethnography, contextual inquiry provides more information about how the system fits into the greater sphere of use than do other methods. See Beyer and Holtzblatt (1998) for a more detailed description of the method.

Of course, one of the most common forms of inquiry methods is the interview. Participants use a system and the human factors psychologist asks users about their experience. The interview can take place immediately after the system has been used (excellent recall, but little time for reflection) or sometime after the system use has been completed (lower recall but more measured responses). Weiss (1995) provides an excellent summary of interview techniques.

Surveys and self-report are two other inquiry methods that are in widespread use. Surveys are excellent at obtaining quantifiable data from a large number of users at minimum cost. They have the disadvantage of not allowing for follow-up or clarification questions. This means that great care must be taken in the construction of the survey to ensure that the responses are valid. Babbitt and Nystrom’s Questionnaire Construction Manual (1989) provides in-depth information about how to build a reliable, valid survey. Self-report is another method for obtaining written information about a system’s use. Commonly employed when the user is not in constant contact with the designer, self-report in the form of diaries or logs can provide valuable information about the system in its normal use. The primary disadvantage of self-report is the decreased participation that tends to occur as the trial progresses.

Workload assessment is another form of inquiry method that designers use to gauge the cognitive and perceptual demands that a system places on the human. Although there are other forms of workload assessment that are not inquiry based (physiological, task-performance based, etc.), inquiry-based methods have proved to be among the most reliable (Wierwille & Connor, 1983). Workload assessment techniques help assess the demands that the system is placing on the user, and determine if the user has any reserve capacity remaining for other secondary tasks. The NASA Task Load Index (TLX) is one of the most widely used workload measurement indices (Hart & Staveland, 1988). The NASA TLX measures workload on six separate dimensions (mental demand, physical demand, temporal demand, performance, effort, and frustration) and weights the user’s assessment of the contribution of each of these dimensions to the demands of the task. The resulting score, on a 100-point scale, makes it easy to compare the workload associated with different systems or different tasks within the same system.
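
A short sketch of the weighted TLX computation follows. Each dimension’s weight is the number of times (0 to 5) the participant selected it in the 15 pairwise comparisons, so the weights sum to 15; the ratings and weights shown are hypothetical.

```python
# NASA-TLX sketch: combine six subscale ratings (0-100) with weights
# from the pairwise-comparison procedure, in which each dimension's
# weight is the number of times (0-5) the participant judged it the
# greater contributor to workload; weights sum to 15. The ratings
# and weights here are hypothetical.

ratings = {   # 0-100 on each dimension
    "mental demand": 80, "physical demand": 20, "temporal demand": 65,
    "performance": 40, "effort": 70, "frustration": 55,
}
weights = {   # tallies from 15 pairwise comparisons; must sum to 15
    "mental demand": 5, "physical demand": 0, "temporal demand": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}
assert sum(weights.values()) == 15

tlx = sum(ratings[d] * weights[d] for d in ratings) / 15
print(f"Weighted NASA-TLX workload: {tlx:.1f} / 100")
```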

Inspection methods were popularized in the 1990s as cost-effective usability methods, often referred to as “discount usability,” came into favor. As the name implies, inspection methods involve an examination of the system by an expert, who then notes operational sequences and design deficiencies. Heuristic evaluations, described in detail by Nielsen (1994), involve the inspection of the system by an expert who uses a set of known usability principles, or heuristics, in order to make an assessment. These heuristics are general in nature, and allow the expert some latitude in determining what may or may not constitute a problem. An example of a heuristic is “the system provides appropriate feedback.” This heuristic is general enough to allow the expert to determine where feedback is required and, where it is, whether the form provided by the system is appropriate. Although the method is extremely cost-effective and easy to implement, this flexibility comes at some cost in reliability. Nielsen and Molich (1990) strongly recommend using multiple expert reviewers (at least five) in order to capture the majority of critical problems in the system.
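
The arithmetic behind the multiple-evaluator advice can be sketched with a commonly cited problem-discovery model: if a single evaluator finds a proportion lambda of the problems, n independent evaluators are expected to find 1 - (1 - lambda)^n of them. The value lambda = 0.31 below is an often-quoted average, not a property of any particular system.

```python
# Sketch of the problem-discovery curve behind the "at least five
# evaluators" advice: if a single expert finds a proportion LAMBDA
# of the usability problems, n independent experts are expected to
# find 1 - (1 - LAMBDA)**n of them. LAMBDA = 0.31 is a commonly
# cited average, assumed here for illustration.

LAMBDA = 0.31

for n in range(1, 9):
    found = 1 - (1 - LAMBDA) ** n
    print(f"{n} evaluator(s): ~{found:.0%} of problems found")
```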

The cognitive walkthrough is a more rigorous form of evaluation. Described in detail by Wharton, Rieman, Lewis, and Polson (1994), the cognitive walkthrough is similar to a heuristic evaluation in the sense that an expert makes an assessment of the system without the benefit of watching a user. However, in a cognitive walkthrough, specific tasks that are high frequency or high importance are identified. The “correct” way of completing these tasks is then detailed, and the expert evaluates deficiencies in the system by using the tasks and the ideal form of completion as a reference frame. For each action a user must take to complete the task in its correct form, the expert determines if the actions required are obvious, if there are potentially confusing options available to the user, and if sufficient feedback is provided to the user at the completion of the step. In this way, a more detailed record of actions and possible deficiencies associated with the task can be developed. Pluralistic walkthroughs (Bias, 1994) are similar to cognitive walkthroughs, except that they broaden the membership of the evaluation team. Unlike a cognitive walkthrough, where only human factors experts are used, a pluralistic walkthrough makes use of the entire project team, including engineers, managers, and programmers. This inclusiveness adds additional perspectives to the output of the analysis.

Observation methods are one of the most powerful and important tools the human factors professional uses in the assessment of completed systems. The methods of observation can be either direct, in which the user is physically observed using the system, or indirect, where user behavior is inferred through the interpretation of some form of telemetric data. Direct observation is frequently referred to as “usability testing.” In this case a user is brought into a controlled setting, like a laboratory, and asked to perform specific tasks with that system. The user’s behavior is observed, and objective metrics concerning the success or failure in the ability of the user to complete the tasks are taken. The method, although relatively expensive and time consuming, provides some of the best objective measures of what system performance is likely to be in the field, provided representative tasks are used in the test. There are a large number of techniques that can be employed in a usability test to maximize the information that can be gleaned from such a study (Rubin, 1994, provides an excellent summary).

Telemetric data provide an indirect form of observation and can be a cost-effective way to collect large amounts of data from a diverse population. Web logs are a good example of telemetric data that provide tremendous insight into the use of a Web-based system, provided the limitations of the method are understood. With Web logs, the human factors psychologist can make inferences about where users are going on the Web site, how long they stay there, and what paths they traverse during this process. The results are inferences because of limitations in the data. For example, the analyst assumes that the identity of the user is known (perhaps from an IP address or login information). However, because there is no means of direct observation, the real user might be an office partner or a family member. Time spent on a page might mean that the user is interested in the page, or that the user has lost interest and is now engaged in some other activity. If these assumptions are carefully managed during the interpretation of the data, telemetry can be an effective and efficient method of collecting observational data.
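
The sketch below shows the kind of inference involved: sessions are reconstructed from timestamped page hits by splitting at a 30-minute inactivity gap (a conventional but arbitrary threshold), and dwell times are inferred from the gaps between hits. Note that the dwell time on the last page of a session cannot be computed at all, which is exactly the sort of limitation discussed above. The log entries are hypothetical.

```python
# Telemetry sketch: infer sessions and page dwell times from a
# timestamped Web log. Sessions split at a 30-minute inactivity
# gap (a conventional, arbitrary threshold), and the dwell time
# on the final page of a session cannot be inferred at all --
# exactly the kind of assumption the text warns about. The log
# entries are hypothetical.

GAP = 30 * 60   # session timeout, seconds

log = [   # (user, unix time, page), assumed sorted by user then time
    ("u1", 0, "/home"), ("u1", 40, "/products"), ("u1", 95, "/cart"),
    ("u1", 8000, "/home"), ("u1", 8030, "/support"),
]

sessions, current = [], [log[0]]
for prev, hit in zip(log, log[1:]):
    same_user = prev[0] == hit[0]
    if same_user and hit[1] - prev[1] <= GAP:
        current.append(hit)
    else:
        sessions.append(current)
        current = [hit]
sessions.append(current)

for s in sessions:
    path = " -> ".join(page for _, _, page in s)
    dwells = [b[1] - a[1] for a, b in zip(s, s[1:])]
    print(f"{s[0][0]}: {path}  dwell(s)={dwells} (last page unknown)")
```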

Methods Used On Systems That Have Suffered A Critical Incident

The human factors psychologist is often involved “after the fact,” when an accident has occurred and the causes need to be determined. The accident at Three Mile Island and the resulting investigation are a perfect example of the human factors professional playing an important role in discovering what went wrong in a complex system failure. The human factors professional draws from a special set of methods when performing this kind of system analysis. Many of these methods can also be conducted before any failure occurs in order to identify and rectify human and system issues that could lead to (catastrophic) failures. Failure modes and effects analysis (FMEA) is one method that searches for point failures in a systematic fashion, working through potential failure scenarios for each subsystem. The method specifically accounts for all three components of the system (hardware, software, human). Stamatis (2003) provides an in-depth review of the method and the theory behind it.
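
As a small illustration of the FMEA bookkeeping, the sketch below ranks hypothetical failure modes by a risk priority number (RPN), the product of severity, occurrence, and detection ratings, each on a 1-10 scale. The failure modes and ratings are invented for illustration; Stamatis (2003) describes the full procedure.

```python
# FMEA sketch: rank hypothetical failure modes by Risk Priority
# Number (RPN = severity x occurrence x detection), each rated
# 1-10, where a high detection score means the failure is HARD
# to detect. Modes and ratings are illustrative only.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("operator misreads coolant level display", 9, 4, 7),
    ("valve position sensor sticks open", 8, 3, 8),
    ("alarm tone masked by ambient noise", 7, 5, 6),
    ("checklist step skipped under time stress", 6, 6, 4),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3],
                reverse=True)

for desc, s, o, d in ranked:
    print(f"RPN {s * o * d:>3}: {desc} (S={s}, O={o}, D={d})")
```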

Critical incident analysis reviews operational logs or other records to determine when adverse events might have occurred. It is usually hard to obtain these data observationally because most critical incidents are too infrequent to catch via real-time observation. If an event has already occurred, critical incident analysis may be used to determine if there were other instances of near mishaps that were reported in some other way. The method frequently relies on self-reported data, often in the form of anonymous user reports, so care must be taken to understand the ramifications of possible under- or overreporting of incidents.

Several methods, such as fault tree analysis, MORT (management oversight and risk tree analysis), and THERP (technique for human error rate prediction), are specifically designed to be conducted before an accident occurs. The aim of these methods is the systematic identification of potential faults in the hardware, software, or human components of the system, with the express goal of rectifying the faults before they actually occur. These methods can be quite effective, but in complex systems it is often difficult to predict how complex interactions may lead to specific error conditions.

Summary

Human factors is a relatively young discipline that combines knowledge and expertise in both psychology and engineering. Those who practice human factors are concerned with how a greater understanding of human strengths and weaknesses can be applied to the design of machines and systems. As technology becomes more and more ubiquitous in our daily lives, the importance of good human factors will become paramount. Complex, interconnected systems that place high demands on the cognitive and perceptual abilities of the user will require that human factors psychologists and engineers be involved in the design to help ensure that these new systems are safe, effective, efficient, and satisfying to use.

References:

  1. Babbitt, B. A., & Nystrom, C. O. (1989). Questionnaire construction manual. Fort Hood, TX: U.S. Army Research Institute for the Behavioral and Social Sciences, Research Product 89-20.
  2. Beyer, H., & Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. San Francisco: Morgan-Kaufmann.
  3. Bias, R. G. (1994). The pluralistic usability walkthrough: Coordinated empathies. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods. New York: John Wiley and Sons.
  4. Broughton, E. (2005). The Bhopal disaster and its aftermath: A review. Environmental Health, 4(6).
  5. Chapanis, A. (1985). Some reflections on progress. Proceedings of the Human Factors Society 29th Annual Meeting (pp. 1-8). Santa Monica, CA: Human Factors Society.
  6. Fechner, G. T. (1966). Elements of psychophysics (H. E. Adler, Trans.). New York: Holt, Rinehart & Winston. (Original work published 1860)
  7. Federal Bureau of Investigation. (1983). Korean Airline Flight 007. United States Government, response to a Freedom of Information request. Retrieved April 24, 2007, from http://foia.fbi.gov/flight/flight1.pdf
  8. Fitts, P. M. (1951). Engineering psychology and equipment design. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1287-1340). New York: Wiley.
  9. Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
  10. Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 239-250). Amsterdam: North Holland Press.
  11. Human Factors and Ergonomics Society. (n.d.). Directory of human factors/ergonomics graduate programs in the United States and Canada. Retrieved April 24, 2007, from http://www.hfes.org/Web/Students/grad_programs.html
  12. International Organization for Standardization. (1998). Ergonomic requirements for office work with visual display terminals (VDTs)—Part 11: Guidance on usability (ISO 9241-11(E)). Geneva, Switzerland: Author.
  13. John, B. E., & Kieras, D. E. (1996). The GOMS family of user interface analysis techniques: Comparison and contrast. ACM Transactions on Computer-Human Interaction, 3(4), 320-351.
  14. Meister, D. (1999). The history of human factors and ergonomics. Mahwah, NJ: Erlbaum.
  15. Millen, D. R. (2000). Rapid ethnography: Time deepening strategies for HCI field research. Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. New York: ACM Press.
  16. Munsterberg, H. (1913). Psychology and industrial efficiency. Boston: Houghton Mifflin.
  17. Nielsen, J. (1994). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 25-62). New York: John Wiley and Sons.
  18. Nielsen, J., & Molich, R. (1990, April). Heuristic evaluation of user interfaces. Proceedings of the ACM CHI ’90 Conference, 249-256.
  19. Roscoe, S. N. (1997). The adolescence of engineering psychology. In S. M. Casey (Series Ed.), Human factors history monograph series (Vol. 1). Santa Monica, CA: Human Factors and Ergonomics Society.
  20. Rubin, J. (1994). Handbook of usability testing. New York: John Wiley and Sons.
  21. Stamatis, D. H. (2003). Failure mode and effect analysis: FMEA from theory to execution. Milwaukee, WI: ASQ Quality Press.
  22. United States Nuclear Regulatory Commission. (2000). Fact sheet on the accident at the Chernobyl nuclear power plant. Retrieved April 24, 2007, from http://www.nrc.gov/reading-rm/doc-collections/fact-sheets/fschernobyl.html
  23. United States Nuclear Regulatory Commission. (2004). Fact sheet on the Three Mile Island accident. Retrieved April 24, 2007, from http://www.nrc.gov/reading-rm/doc-collections/fact-sheets/3mile-isle.html
  24. Van Cott, H. P., & Huey, B. M. (Eds.). (1992). Human factors specialists’ education and utilization: Results of a survey. Washington, DC: National Academy Press.
  25. Weiss, R. S. (1995). Learning from strangers: The art and method of qualitative interview studies. New York: Free Press.
  26. Wharton, C., Rieman, J., Lewis, C., & Polson, P. (1994). The cognitive walkthrough method: A practitioner’s guide. In J. Nielsen & R. Mack (Eds.), Usability inspection methods (pp. 105-140). New York: John Wiley and Sons.
  27. Wierwille, W. W., & Connor, S. A. (1983). Evaluation of 20 workload measures using a psychomotor task in a moving-base aircraft simulator. Human Factors, 25(1), 1-16.
