Police Promotional Practices Research Paper


Controversy surrounds promotional processes in American police departments. This is true despite the fact that promotional tests are valid and useful tools when constructed consistently with professional and legal guidelines. Candidates and management share some concerns about promotional processes, but candidates’ concerns focus mostly on the degree to which promotional processes are high quality, transparent, consistent, and fair. Management’s concerns focus more on whether the promotional process identifies the best candidates and minimizes costs and grievances. The research on promotional tests points to best practices for constructing tests and promotional processes. In an effort to avoid direct and indirect costs, departments often forgo these best practices. When the result of cost savings is an invalid test, the consequences are severe, including litigation, low employee morale, and ineffective leadership succession plans. By contrast, the investment in a well-designed promotional process creates buy-in, promotional opportunities, employee development, and a clear leadership succession plan.

Key Issues/Controversies

Introduction

Promotional processes in American police departments are considered high-stakes tests because their outcome affects candidates, department management, customers (i.e., citizens), and potentially other departments within a jurisdiction, including human resources, budget, and legal. These tests may include a simple multiple-choice job knowledge test, an assessment center, a structured interview, a performance evaluation, or some combination of these components. Relative to other test types, a multiple-choice test is inexpensive. However, it is a limited tool. Multiple-choice tests do not require candidates to directly demonstrate their knowledge and have often been shown to result in adverse impact against minority candidates (Hunter and Hunter 1984; Rushton and Jensen 2005).

Assessment centers are a combination of simulations such as oral presentations, writing exercises, and emergency incidents that measure candidates’ knowledge, skills, and abilities in different testing formats. The research has shown that assessment centers are valid measures of knowledge, skills, and abilities (Arthur et al. 2003) and are a more direct measure than a job knowledge test.

Interviews and performance evaluations are often used in place of, or to augment, other types of tests. An interview can be structured to be a more direct measure of knowledge, skills, and abilities than a multiple-choice test. Structured interviews have high validity, but they do not measure written communication skills.

Performance evaluations are generally unreliable measures. Unless properly validated, they should not be incorporated into a promotional process, and even then, they are routinely subject to successful legal challenges. The job analysis of the target rank determines what knowledge, skills, and abilities should be measured. The best promotional practices incorporate multiple assessment tools in order to measure a broad set of important knowledge, skills, and abilities (directly and indirectly) and to reduce subgroup differences whenever possible.

Controversy In Promotional Processes

Controversy surrounds promotional processes in American police departments. The causes of the controversies vary. Typically, these controversies stem from the candidates who take the tests; less frequently, they stem from management.

Candidates who are not promoted often must wait 1–3 years, and in some departments longer, before they can compete in another promotional process. These candidates believe that this waiting period delays their salary increases, their career progression, and, potentially, their pensions.

Those who are promoted are critical to the success of the department because they must be able to supervise and lead officers who are often assigned to decentralized positions. They must ensure that patrol officers protect the rights of citizens, protect victims, properly handle illegal drugs and money, and perform many other critical tasks. Without effective supervision and leadership, there is unlimited opportunity for intentional and unintentional corruption.

One of the most challenging aspects of using promotional exams in police departments is ensuring that they are valid and produce a diverse group of candidates. This result can be achieved, but it takes a deliberate and thorough plan to succeed. Further, creating a valid test that is defensible if legally challenged is difficult and expensive. Recently, it has become more difficult to defend a challenged test because proving that the test is valid may be insufficient (Johnson v. City of Memphis 2006). Today, it seems that the defense must also demonstrate that they considered all equally valid alternatives and ultimately administered the test with the least adverse impact. However, finding or developing a valid alternative that produces acceptable diversity remains elusive due to:

  1. Testing outcomes (i.e., population subgroups may not perform equally)
  2. Strict laws that govern testing in employment settings (a sketch of the conventional adverse impact screen follows this list)
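The Uniform Guidelines’ conventional screen for adverse impact is the “four-fifths” (80 percent) rule: a group whose selection rate falls below four-fifths of the highest group’s rate is treated as showing evidence of adverse impact. Below is a minimal sketch of that check; the group names and counts are hypothetical illustrations, not data from any department.

```python
# Minimal sketch of the EEOC Uniform Guidelines' "four-fifths" (80%) rule.
# Group names and applicant counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants who passed the test."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest rate."""
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

rates = {
    "Group A": selection_rate(selected=45, applicants=60),  # 0.75
    "Group B": selection_rate(selected=21, applicants=40),  # 0.525
}
print(four_fifths_check(rates))
# {'Group A': True, 'Group B': False} -> 0.525/0.75 = 0.70 < 0.80,
# so the process shows possible adverse impact against Group B
```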

Who Writes The Test?

There are no statistics showing how many police departments outsource their promotional exams compared to those that write them internally. Since the 1970s, there has been insufficient internal departmental staff to validate promotional exams consistent with the Uniform Guidelines on Employee Selection Procedures (EEOC 1978), the Principles for the Validation and Use of Personnel Selection Procedures (SIOP 2003), and case law precedent. There is a tendency for departments that write their exams internally to continue to do so until there is a specific challenge. Those that use external consultants have experienced mixed results, but there seems to be a pattern of better success in defending a test with consultant services than without.

When insufficient resources are devoted to promotional tests, an invalid test may result, and the consequences are often severe. A civil rights challenge (Civil Rights Act 1964, as amended in 1991) can stop a promotional process for several years because the litigation is lengthy and expensive. The ripple effects from a suspended promotional process affect the ability to fill vacancies in other ranks, the competence of supervision and leadership within the department, and the individuals who seek promotion but cannot compete while the litigation is pending.

Candidate Protests

Candidates express concerns about promotional processes that contain poor-quality test content and lack transparency, consistency, and fairness.

Test Quality

Researchers have analyzed test bias and test flaws and how they may contribute to test performance disparities. This line of research presumes that test bias is either partially or entirely the cause of those disparities. The biases and flaws studied include, but are not limited to, test unreliability, invalid cognitive load, and invalid passing or weighting schemes (Hunter and Hunter 1984; Sackett et al. 2001). In general, these theories have not led to the creation of perfect tests, as evidenced by continued internal grievances and external litigation.

Furthermore, candidates are not likely to accept a test containing questions with typographical errors, questions about minutiae, questions that are academic in nature, or questions that are relevant only to a subset of the population (e.g., questions only investigators would need to answer on a first-line supervisor test). The degree to which these types of tests negatively affect subgroup populations determines what types of actions candidates take. If, for example, minority candidates are excluded from promotion based upon a test, they might sue their department or jurisdiction. If all candidates believe the test was poorly constructed, they might internally appeal, grieve, or verbally protest. In cases such as these, management will likely know about the problems. However, management rarely takes action unless pressure comes from an outside source such as the US Department of Justice, the Equal Employment Opportunity Commission, a lawyer, or the union. The irony is that a successful promotional process, without that unwanted interference, is developmental: it creates a highly ethical environment, and it becomes an automatic part of a succession plan.

Transparency

There is little research on promotional process transparency. Transparency can be defined as “informing candidates about their promotional process before the process is administered to include, but is not limited to, test dates, times and locations, test components, what will be measured, how it will be administered, how it will be scored, how the results will be published, and how long the results will be used for promotions.” There has been some debate over whether the scoring criteria for a test can be determined after test administration. This debate is contentious because the reasons for assigning passing scores or test component weights after test administration are generally unethical. For example, this system would allow the department to select a specific number of candidates, select specific individuals, or select a specific group of individuals (e.g., minorities, females, nonminorities). Each of these reasons negates transparency. Candidates, in general, prefer that the test process and rules be announced in advance of the test so that any potential for manipulation of the results is eliminated. Unless the leaders of the department have ulterior motives, they too tend to prefer a completely transparent promotional process.

Test score banding falls under candidates’ transparency concern. Banding is a process in which candidates’ test scores are placed into bands or groups. For example, candidates scoring between 90 and 100 are placed in one band, and candidates scoring between 80 and 89 are placed in another. Banding can be unpopular because candidates often believe that the highest-scoring candidate should be promoted first. This belief rests on the theory that a test is a perfect predictor of future job success. A valid test is not a perfect measure of performance, nor is it complete. It is a predictor; it predicts who is most likely to succeed in the target rank. A separate article could be devoted to banding because it is controversial (e.g., Bobko and Roth 2004). This author has had success using administrative bands, which place candidates’ test scores into predefined (fixed) groups (e.g., a rule of 10), paired with a valid measure for selecting from within the band. This procedure can assist with promoting a diverse workforce, and if the rules are published in advance, it is transparent.
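As a concrete illustration of fixed administrative banding, the sketch below places hypothetical scores into pre-published bands. The band boundaries, labels, and candidate names are assumptions for illustration, not a prescribed scheme.

```python
# Minimal sketch of administrative (fixed) score banding as described above.
# Band boundaries and candidate scores are hypothetical.

def assign_band(score: float) -> str:
    """Place a test score into a predefined, published band."""
    if score >= 90:
        return "Band 1 (90-100)"
    if score >= 80:
        return "Band 2 (80-89)"
    return "Below cutoff"

candidates = {"Smith": 94, "Garcia": 91, "Jones": 88, "Lee": 79}
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score} -> {assign_band(score)}")
# Selection within a band then proceeds by a separate, pre-published valid measure.
```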

Consistency

It is well known among test developers that a test’s validity is limited by its reliability (Cascio 1998). This means that if a test is unreliable, it cannot measure what it is intended to measure. Three examples of unreliability are (1) inconsistent test administration, (2) unreliable test content, and (3) inconsistent ratings, if ratings are used. When a test is administered inconsistently to candidates, the candidates’ scores are likely to reflect that inconsistent administration. It is then impossible to determine whether the differences in candidates’ scores were caused by the inconsistent administration or by the test content itself. This invalidates the test.
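This ceiling has a standard formulation in classical test theory. As a hedged illustration (the bound below is standard psychometrics, not a formula quoted from Cascio 1998):

```latex
% Attenuation bound from classical test theory: validity cannot exceed
% the geometric mean of the test's and the criterion's reliabilities.
r_{xy} \le \sqrt{r_{xx}\, r_{yy}}
% where r_xy is the validity coefficient, r_xx the test's reliability, and
% r_yy the criterion's reliability. For example, a test with reliability
% r_xx = 0.64 cannot achieve validity above sqrt(0.64) = 0.8, even against
% a perfectly reliable criterion (r_yy = 1).
```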

It is also well known that when test content is unreliable, the test cannot be valid. For example, when the wording of a question is indecipherable, it is unreliable. If the responder cannot interpret the question correctly, then the response is unreliable and therefore invalid.

Often, raters score part of a promotional test. If these assessors cannot agree on a score for the candidate’s performance, the test will also be considered unreliable. The rating criteria must help assessors rate candidates consistently. If raters cannot do so, the rating criteria are unreliable, and the test is therefore invalid.
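Inter-rater consistency can be quantified directly. A common index (chosen here for illustration; it is not named in the sources above) is Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. The ratings below are hypothetical.

```python
# Minimal sketch of checking inter-rater consistency with Cohen's kappa,
# computed directly from two assessors' ratings. Ratings are hypothetical.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 2))  # 0.67; low values suggest unreliable criteria
```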

Candidates generally recognize an unreliable test because inconsistent test administration and indecipherable test content are obvious. Unreliable rating criteria are not as obvious to candidates unless they receive feedback about their test performance. Typically, the feedback about their performance will cause candidates to question the quality of the rating criteria.

Fairness

Decades of research have focused on the differences in test outcomes between population subgroups, with particular emphasis on race differences in test performance. Subgroup populations that have been compared and found to differ with regard to test performance include, but are not limited to, African Americans, Asians, Hispanics, and Whites (Cullen et al. 2004; Neisser et al. 1996; Nguyen 2003; Roth et al. 2001; Rushton and Jensen 2005; Schmidt and Hunter 1998). The extent of the gap between African American and White cognitive ability test performance has been shown to be between 0.5 and 1.5 standard deviations (e.g., Hunter and Hunter 1984; Rushton and Jensen 2005), with White test takers scoring higher than African American test takers. Research on African American-White test performance differences has dominated this literature and continues to be a central concern.
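Gaps expressed in standard deviation units are standardized mean differences (Cohen’s d: the mean difference divided by the pooled standard deviation). The sketch below computes d for two hypothetical score distributions; the numbers are illustrative only, not data from the studies cited above.

```python
# Minimal sketch of the standardized mean difference (Cohen's d) used to
# express subgroup score gaps in standard deviation units. Scores are hypothetical.
import statistics

def cohens_d(group1: list[float], group2: list[float]) -> float:
    """Mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

group1 = [78, 85, 90, 82, 88, 91]
group2 = [74, 80, 85, 78, 83, 87]
print(round(cohens_d(group1, group2), 2))
# ~0.92, i.e., a gap of roughly 0.9 SD, within the 0.5-1.5 range noted above
```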

Some researchers consider this consistent group difference evidence of a difference in intelligence levels between subgroups. This intelligence is also referred to in the research literature as g, shorthand for general mental ability (Gottfredson 1997). In military studies, path models of training and job proficiency have shown that g strongly predicts success in training and acquiring job knowledge, which in turn strongly predicts task proficiency (Ree et al. 1995). Studies throughout the decades, that include both nonmilitary and military samples, have shown that cognitive ability test performance, driven by g, is one of the strongest predictors of job performance and training outcomes (Schmidt and Hunter 1998).

Researchers continually examine whether cognitive ability test performance is a valid predictor of job performance. The research repeatedly finds that cognitive ability tests are valid predictors of training and job performance, with some exceptions. For example, Campbell (1996) states that there is no substantive definition of intelligence, which becomes problematic when we attempt to measure it: because intelligence is so difficult to define, it may be difficult to measure reliably.

Campbell (1996) says that achievement-based knowledge may predict individual differences in management performance better than measures of intelligence. Intuitively, this makes sense: testing a more specific content domain (such as promotional testing) should distinguish differences in job performance better than testing a content domain that may be too general (such as cognitive ability). “If test scores represent the relative mastery of a particular domain of response capabilities, then a ‘score’ must be a function of the amount of exposure to the domain, the amount of attention paid to it, the motivational forces that control attention, the amount and kind of practice, and the retention of the specified response capabilities” (Campbell 1996, p. 131).

Regarding fairness, the findings, in general, are that cognitive ability tests do not differentially predict African American and White test performance (e.g., Neisser et al. 1996; Rushton and Jensen 2005; Schmidt and Hunter 1998, p. 272). This has led to the conclusion that tests are not culturally biased. Campbell (1996) wonders whether the important question is not whether tests and criteria are culturally biased but whether bias is appropriate. In other words, should observed group differences be accepted as fact?

The concepts of cultural bias and differential prediction are two definitions of fairness, but not everyone agrees. For example, some individuals define fairness as a test that produces equal, or near equal, subgroup outcomes. This philosophical debate occurs during test litigation and in informal settings more so than in the scientific literature. In the scientific literature, the preponderance of the research concludes that cognitive ability is a valid and fair predictor of job performance.

In sum, the scientific evidence concludes that tests, in general, produce different subgroup outcomes even though they do not contain cultural bias nor do they differentially predict subgroup job performance. However, many in American society have not accepted these conclusions. As a result, plaintiffs have brought employment discrimination lawsuits, litigation has ensued, and laws have been enacted.

Why Do Subgroup Differences Exist? Another Explanation

Current researchers who study subgroup differences in test performance are focusing on differences in education and socioeconomic status that occur well before test takers enter the workforce (Campbell 1996; Campbell et al. 2001). “If two groups differ in mean IQ, culture-only theorists conjecture either that the lower scoring group has been exposed to one or more deleterious experiences or been deprived of some beneficial environmental stimuli or that the tests are not valid measures of their true ability” (Rushton and Jensen 2005, p. 240).

What cultural influences might cause group disparities in test performance? There are a host of environmental factors including, but not limited to, the type and quantity of tests individuals were exposed to during primary education and their formative years. For example, one might hypothesize that better school systems utilize more tests, more source materials for those tests (e.g., textbooks), more formal test preparation training, and more extensive instruction, and that these tests, textbooks, training, and instruction are of higher quality.

If subgroup test performance is based upon early life cultural influences, then test professionals have a potential method to build systems that compensate for what was not provided in early life. According to Ryan (2001), test perceptions (stemming from environmental influences) include anxiety, self-efficacy, motivation, belief in tests, and perceptions of a specific test’s fairness and job relatedness. Ryan (2001) noted that there has been a recent “focus on how differences between African Americans and whites may be a function of the way they approach the cognitive ability testing situation itself” (Ryan 2001, p. 45). Any examination of moderators of the race-test performance relationship implies that if those moderators can be controlled, group differences should disappear.

Some common moderators, discussed next, are test-taking skills, test motivation, test anxiety, and stereotype threat.

Test-Taking Skills

Test preparation training assumes that certain population subgroups were improperly or incompletely prepared to take tests during their primary education. Test preparation can influence test-taking skills, which incorporate test-wiseness, practice, and preparation. Test-wiseness includes the ability to utilize information within the test to assist the individual in identifying the correct answers, independent of the construct being measured (Sarnacki 1979). Examples include the ability to eliminate alternatives in order to identify the only correct alternative, or understanding the level of detail that must be displayed in order to increase one’s test score (e.g., in a writing test or in an oral communication test). Test-taking preparation includes organizing, learning, and memorizing the study material. Clause et al. (2001) found that differences in the effectiveness of study methods and learning processes could explain differences in individual test performance. Test-taking practice is repeated exposure to the type of test questions that the individual will respond to in the test. Test-taking practice should positively affect test-wiseness and test perceptions. However, “mere experience in testing does not guarantee future success on tests” (Sarnacki 1979, p. 264).

Hough et al. (2001) reviewed the research on test preparation courses (test orientation sessions, test coaching) and concluded that such courses have mixed results in improving cognitive ability test performance. The major problems with this line of research are:

  • The test preparation course is not clearly described in each case;
  • Participation is voluntary, meaning that only those who are motivated to do well, or who are perhaps high performers in any case, attend; and
  • There are no control groups to compare the performance of those who attended test preparation with that of those who did not.

There is a clear pattern in law enforcement that better test performers attend test preparation classes and seek out test-related feedback more than those who perform poorly. To address this issue, more departments are making test preparation courses a mandatory part of the promotional process. This requirement has the added benefit of increasing the transparency of the promotional process for all candidates, thus decreasing the chance of protests about “not being aware” of some part of the process. It also permits challenges in advance of the process that can be addressed before a problem multiplies.

Test Motivation

Test motivation research has focused on whether test takers’ motivation influences their test performance. Hough et al. (2001) summarized research comparing subgroup test perceptions and test motivation, concluding that differences in test attitudes do in fact explain subgroup test score differences. Clause et al. (2001) found that individual test-taking motivation and self-efficacy affected an individual’s meta-cognitive preparation activity. In essence, those who were most motivated and most aware of their own strengths and weaknesses as they relate to test content were most likely to structure their test preparation time to maximize their own test performance.

The most common motivation-ability interaction stems from Vroom’s (1964) expectancy theory, which indicates that, when motivation is low, both high- and low-ability individuals demonstrate similarly low levels of performance. When motivation is high, however, variability in performance due to individual differences is more apparent (Kanfer and Ackerman 1989). Overall, test-taking motivation can be divided into two general categories: (1) the motivation to succeed on the test based upon the expectation that it will be instrumental in achieving desired outcomes (e.g., selection), and (2) the motivation to exert effort throughout the entire testing program.
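As a purely illustrative sketch of this interaction (the multiplicative form and the numbers below are assumptions for illustration, not Vroom’s or Kanfer and Ackerman’s actual models), low motivation compresses the performance differences that ability would otherwise produce:

```python
# Toy model of the motivation-ability interaction described above:
# performance treated as proportional to ability x motivation.
# The functional form and all numbers are illustrative assumptions.

def predicted_performance(ability: float, motivation: float) -> float:
    """Toy model: performance scales with both ability and motivation."""
    return ability * motivation

for motivation in (0.2, 0.9):  # low vs. high test motivation
    low = predicted_performance(60, motivation)
    high = predicted_performance(95, motivation)
    print(f"motivation={motivation}: low ability -> {low:.0f}, "
          f"high ability -> {high:.0f}, spread = {high - low:.0f}")
# Low motivation yields a spread of 7 points; high motivation, 32 points.
# Ability differences only become visible when motivation is high.
```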

Several studies have observed that motivation affects individual performance in multiple ways (Kanfer and Ackerman 1989). Ployhart and Ehrhart (2002) found that differences in test-taking motivation across groups could be reduced through the development of more face-valid tests. An individual who is unmotivated to perform in the target position may also be unmotivated to expend the effort necessary to study and learn the testing (selection) process and associated materials. Furthermore, test takers who do not believe they can succeed in a testing process may lack the motivation to succeed. Those who have witnessed numerous testing program failures may not be motivated to succeed in a testing program since they believe the test results are likely to be unusable. This lack of motivation will then negatively affect test takers’ performance on the test, or their desire to exert effort from the beginning to the end of a long testing process.

Sanchez et al. (2000) found a positive relationship between expectancy and actual test performance, indicating the motivational influence of expected outcomes on test performance. Additionally, when compared to a pretest, final test-taking motivation differs among participants from different subgroups even after controlling for race and performance on the pretest (Chan et al. 1997). Test preparation has been shown to increase test motivation, suggesting that preparation is a mechanism through which motivation affects test performance (Clause et al. 2001).

Test Anxiety

Test anxiety research focuses on the a priori anxiety that individuals bring to a test situation and the anxiety that candidates experience while taking their tests. A priori test anxiety can be attributed to fear of the unknown (the test format and content) and fear about test performance. Tobias (1985) examined the anxiety that candidates experience while taking the test, specifically the extent to which test anxiety interferes with retrieval of prior learning. Fill-in-the-blank tests (such as assessment center exercises) are more of a problem for test-anxious students (Benjamin et al. 1981). When test takers were trained in test-taking skills, their anxiety levels were reduced, resulting in higher performance (Tobias 1985).

Stereotype Threat

Another topic of research in subgroup test performance disparities has been stereotype threat, which has been shown to partially explain minority candidates’ performance on cognitive ability tests. If stereotype threat is low, the test performance of minority applicants improves (McKay et al. 2003). Steele and Aronson (1995) hypothesize that stereotype threat may interfere with intellectual functioning. Additionally, race differences on cognitive ability tests were significantly lowered when stereotype threat was low (Brown and Day 2006). It is important to note that stereotype threat is not theorized to be responsible for the gap, but it is believed to widen existing gaps in test performance (Nguyen et al. 2003).

Management Concerns

Management’s concerns about promotional processes include whether the process identifies the most qualified candidates, the direct and indirect costs of administering it, and the number of complaints, grievances, appeals, and lawsuits that result. Managers typically want the most qualified candidates to succeed in the promotional process, and they want a diverse candidate pool from which to choose. They are frequently concerned with the cost of testing in terms of direct dollars and staff time. In some cases, as discussed earlier, they will administer lower-quality tests in order to save money and time. If test grievances and litigation abound, the department’s leadership succession plan may be compromised. Grievances and litigation also negatively affect the integrity of the department and employee morale.

Why Use Promotional Processes?

As stated, the research on promotional tests has generally shown that they are valid predictors of job performance (e.g., Schmidt and Hunter 1998). For this very reason, American police departments are large buyers and users of promotional examinations. Other reasons for their reliance on promotional exams include their desire for objectivity and their need to evaluate large candidate pools. Promotional tests, in particular, meet all of these requirements.

Conclusions

It is possible to create a valid promotional process that produces a diverse candidate pool and that withstands candidates’ and management’s protests. The research suggests that a comprehensive promotional testing program will include at least:

  1. A test developer trained in test validation
  2. A thorough test preparation session that makes the promotional process transparent and provides exposure to the test format and test preparation strategies
  3. A valid test that measures a representative sample of the important knowledge, skills, and abilities (KSAs) needed for the target rank
  4. Multiple methods to measure those KSAs so that all adult test takers can successfully demonstrate their KSAs (e.g., verbally, in writing)
  5. Informing candidates of their scores (while maintaining their right to privacy)
  6. Administrative banding of test scores
  7. Detailed candidate feedback

The up-front costs of a well-designed promotional process are far outweighed by the costs of a poor one. A minimal sketch of how several of these elements fit together follows.
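The sketch below is a hedged illustration of items 4 and 6: a composite built from multiple assessment components with pre-published weights, then placed into an administrative band. The component names, weights, and band boundaries are hypothetical; in practice they must come from the job analysis and be announced in advance.

```python
# Minimal sketch: weighted composite from multiple pre-weighted components,
# followed by administrative banding. All names, weights, and boundaries
# are hypothetical; real weights must come from the job analysis and be
# published before the test is administered.

WEIGHTS = {"job_knowledge": 0.40, "assessment_center": 0.40, "structured_interview": 0.20}

def composite(scores: dict[str, float]) -> float:
    """Weighted sum of component scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

candidate = {"job_knowledge": 88, "assessment_center": 92, "structured_interview": 85}
total = composite(candidate)
band = ("Band 1 (90-100)" if total >= 90
        else "Band 2 (80-89)" if total >= 80
        else "Below cutoff")
print(f"composite = {total:.1f} -> {band}")  # composite = 89.0 -> Band 2 (80-89)
```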

Bibliography:

  1. Arthur W, Day EA, McNelly TL, Edens PS (2003) A meta-analysis of the criterion-related validity of assessment center dimensions. Pers Psychol 56(1):125–154
  2. Benjamin M, McKeachie W, Lin Y, Hollinger D (1981) Test anxiety: deficits in information processing. J Educ Psychol 73:816–824
  3. Bobko P, Roth P (2004) Personnel selection with top-score-referenced banding: on the inappropriateness of current procedures. Int J Sel Assess 12:291–298
  4. Brown RP, Day EA (2006) The difference isn’t black and white: stereotype threat and the race gap on Raven’s advance progressive matrices. J Appl Psychol 91(4):979–985
  5. Campbell JP (1996) Group differences and personnel decisions: validity, fairness, and affirmative action. J Vocat Behav 49:122–158
  6. Campbell FA, Pungello EP, Miller-Johnson S, Burchinal M, Ramey CT (2001) The development of cognitive and academic abilities: growth curves from an early childhood educational experiment. Dev Psychol 37(2):231–242
  7. Cascio WF (1998) Applied psychology in human resource management, 5th edn. Prentice Hall, Upper Saddle River
  8. Chan D, Schmitt N, DeShon RP (1997) Reactions to cognitive ability tests: the relationships between race, test performance, face validity perceptions, and test-taking motivation. J Appl Psychol 82(2):300–310
  9. Civil Rights Act of 1964, 42 U.S.C. § 1971 et seq. (1964), as amended in 1991, Pub. L. 102–166 (1991)
  10. Clause CS, Delbridge K, Schmitt N, Chan D, Jennings D (2001) Test preparation activities and employment test performance. Hum Perform 14(2): 149–167
  11. Cullen MJ, Hardison CM, Sackett PR (2004) Using SAT-grade and ability-job performance relationships to test predictions derived from stereotype threat theory. J Appl Psychol 89(2):220–230
  12. Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, Department of Justice (1978) Adoption by four agencies of Uniform Guidelines on Employee Selection Procedures. 43 Federal Register 38,290–38,315 (August 25, 1978)
  13. Gottfredson LS (1997) Why g matters: the complexity of everyday life. Intelligence 24(1):79–132
  14. Hough LM, Oswald FL, Ployhart RE (2001) Determinants, detection and amelioration of adverse impact in personnel selection procedures: issues, evidence and lessons learned. Int J Sel Assess 9(1–2):152–194
  15. Hunter JE, Hunter RF (1984) Validity and utility of alternative predictors of job performance. Psychol Bull 96(1):72–98
  16. Johnson v. City of Memphis, Case No. 00–2608, 04–2017 and Billingsley v. City of Memphis, Case No. 04–2013 (2006)
  17. Kanfer R, Ackerman PL (1989) Motivation and cognitive abilities: an integrative/aptitude-treatment interaction approach to skill acquisition. J Appl Psychol 74(4):657–690
  18. McKay PF, Doverspike D, Bowen-Hilton D, McKay QD (2003) The effects of demographic variables and stereotype threat on black/white differences in cognitive ability test performance. J Bus Psychol 18(1):1–14
  19. Neisser U, Boodoo G, Bouchard TJ, Boykin AW, Brody N, Ceci SJ, Halpern DF, Loehlin JC, Perloff R, Sternberg RJ, Urbina S (1996) Intelligence: knowns and unknowns. Am Psychol 51(2):77–101
  20. Nguyen HD, O’Neal A, Ryan AM (2003) Relating test-taking attitudes and skills and stereotype threat effects to the racial gap in cognitive ability test performance. Hum Perform 16(3):261–293
  21. Ployhart RE, Ehrhart MG (2002) Modeling the practical effects of applicant reactions: subgroup differences in test-taking motivation, test performance, and selection rates. Int J Sel Assess 10(4):258–270
  22. Ree MJ, Carretta TR, Teachout MS (1995) Role of ability and prior job knowledge in complex training performance. J Appl Psychol 80(6):721–730
  23. Roth PL, Bevier CA, Bobko P (2001) Ethnic group differences in cognitive ability in employment and education settings: a meta-analysis. Pers Psychol 54(2):297–330
  24. Rushton JP, Jensen AR (2005) Thirty years of research on race differences in cognitive ability. Psychol Publ Policy Law 11(2):235–294
  25. Ryan AM (2001) Explaining the black/white test score gap: the role of test perceptions. Hum Perform 14:45–75
  26. Sackett PR, Schmitt N, Ellingson JE, Kabin MB (2001) High-stakes testing in employment, credentialing, and higher education: prospects in a post-affirmative-action world. Am Psychol 56(4):302–318
  27. Sanchez RJ, Truxillo DM, Bauer TN (2000) Development and examination of an expectancy-based measure of test-taking motivation. J Appl Psychol 85:739–750
  28. Sarnacki RE (1979) An examination of test-wiseness in the cognitive test domain. Rev Educ Res 49(2):252–279
  29. Schmidt FL, Hunter JE (1998) The validity and utility of selection methods in personnel psychology: practical and theoretical implications of 85 years of research findings. Psychol Bull 124(2):262–274
  30. Society for Industrial and Organizational Psychology Inc (2003) Principles for the validation and use of personnel selection procedures, 3rd edn. Author, College Park
  31. Steele CM, Aronson J (1995) Stereotype threat and the intellectual test performance of African-Americans. J Pers Soc Psychol 69:797–811
  32. Tobias S (1985) Test anxiety: interference, defective skills, and cognitive capacity. Educ Psychol 20(3):135–142
  33. Vroom VH (1964) Work and motivation. Wiley, New York
