Place-Based Randomized Trials Research Paper


Place-randomized trials are an important vehicle for generating evidence about “what works” in criminology. Place-based randomized trials are a form of cluster randomization that involves identifying a sample of places (for instance, crime hot spots) and randomly allocating these locations to different police or community interventions. Random allocation assures a fair comparison among the interventions, and when the analysis is correct, a legitimate statistical statement of confidence in the resulting estimates of their effectiveness can be made.

This research paper provides basic definitions and practical counsel about the use of such trials in generating evidence in crime prevention. It also identifies issues, ideas, and challenges that might be addressed by future research. Finally, it discusses the special analytic difficulties that may occur in the development of place-randomized trials as opposed to more traditional trials where individuals are the units of allocation and analysis.

Places That Are Randomized: Theory And Units Of Randomization

Place-randomized trials depend on a clear understanding of the role of place. A place can be an entity in itself; for instance, a business establishment has a legal status separate from that of its owners or employees. Or a place can be an organizational convenience for smaller units within it. One may, for instance, be interested in the effect of neighborhood context on the likelihood of individual victimization. Inasmuch as individual habits can also contribute to the risk of victimization, attention may be directed both to the attributes of neighborhoods and to the individuals who live within them.

Whether the place is an entity per se or a receptacle for lower-level observational units can dramatically affect how the randomized trial is designed and analyzed. The key definitional point in a “place-randomized trial,” however, is that the random assignment occurs at the place level. The implication is that the methodological benefits of random assignment are realized across places, not the units within them. Statistical analysis at the place level therefore conforms to well-understood and accepted statistical practices. Analysis attempted at the level of the nonrandomized units nested within places can be complex and controversial.

The “units of randomization” in a place-randomized trial may vary considerably. Weisburd and Green’s (1995) study, for instance, operationalized “drug hot spots” as street segments rather than institutions such as housing developments, schools, or business units. The broad theory underlying the trial posited that focusing police and other resources on these hot spots would reduce crime, rather than leading to no effect or a migration of criminal activity into other areas. An earlier crime hot spot trial by Sherman and Weisburd (1995) in Minneapolis similarly defined the unit of analysis as a single street segment from intersection to intersection.

Other places have been targeted for different types of interventions, such as saloons in the context of preventing violence and glassware-related injuries (Warburton and Shepherd 2000). Private properties, including apartment houses and businesses, have been targeted in a study of the effects of civil remedies and drug control (Mazerolle et al. 2000). Convenience stores, crack houses, and other entities have also been randomly allocated to different interventions.

These examples invite a number of basic questions. How can such trials be deployed well? Can better theory be developed about what should happen as a consequence of an intervention at higher levels of aggregation: province or county, city or village, institution or housing development, or crime hot spot? Can theories be developed to guide thinking about change, or the rate of change, at the primary aggregate level – the places – and below it? What new statistical problems emerge from the randomization of places? These examples also invite questions about how to learn about other trials of this sort, involving yet other units of allocation and analysis.

Relationships And Agreements

People get place-randomized trials off the ground through agreements between the trialist’s team and prospective partners in the place-based trial. Here, “partners” means individuals or groups whose cooperation and experience are essential in deploying both the intervention and the trial. Weisburd (2005) emphasizes the need to develop personal relationships that lead to trust and a willingness to experiment with innovations that might work better than conventional practice. In his Jersey City experiment, for instance, the strong involvement of the senior police commander as principal investigator in the study played a critical role in preventing a breakdown of the experiment after 9 months.

In Jersey City, the Deputy Chief who administered the interventions was strongly convinced of the failures of traditional approaches and the need to test new ones. The commander took personal authority over the narcotics unit and used his command powers to carefully monitor the daily activities of detectives in the trial. This style of work suggests the importance of integrating “clinical” and research work in criminal justice, much as they are integrated in medical experiments. It also reinforces the importance of practitioner “belief” in the necessity of implementing a randomized study. The Kingswood experiment described by Clarke and Cornish (1972) illustrates how doubts regarding the application of an experimental treatment led practitioners to undermine the implementation of the study.

Place-randomized trials do not only involve the practitioners and researchers who are directly involved with implementation and analysis. For instance, in policing, federal funding agencies, such as the National Institute of Justice and the Bureau of Justice Assistance; accreditation agencies; and professional organizations such as the International Association of Chiefs of Police, the Police Executive Research Forum, or the Major Cities Chiefs Police Association can all influence the willingness of individual police departments to adopt place-based practices and implement place-based trials. The capacity of researchers and practitioners to carry out place-randomized trials is therefore often influenced by the preferences and priorities of these institutional stakeholders.

Developing relationships in place-randomized trials, as in many other kinds of field research, depends on reputation and trust, of course. The topic invites attention to questions for the future. How can better contracts and agreements be developed among networks of public and private organizations, agreements that permit us to generate better evidence about the effects of an innovation? How can “model” contracts and memoranda of understanding be developed and made available to other trialists and their potential collaborators?

Justifications For A Place-Randomized Trial

For many social scientists, an important condition for mounting a randomized trial on any intervention that is purported to work is that (a) the effectiveness of a conventional practice, policy, or program is debatable and (b) the debates can be informed by better scientific evidence. In the crime sector, police, of course, are local theorists, and they often disagree about what could work better. Crime experts have also disagreed about what approaches might be effective in high-crime areas. More generally, people disagree with one another about what might work in the policy sector, and there is, at times, some agreement that better evidence would be helpful.

For instance, Weisburd (2003) points out that one of the major justifications for randomized trials is disagreement among experts about the effectiveness of an intervention. This is an important factor in justifying a randomized trial using either individuals or places as the units of random allocation. The Cambridge-Somerville Youth Study, one of the most famous experiments in youth crime prevention, illustrated that even well-planned and well-implemented intervention programs may have no effect on offending behavior or may even have a “backfire” effect (McCord 2003). Experimental studies of the effectiveness of random beat patrol and foot patrol likewise produced surprising results regarding the effectiveness of widely accepted standard practices in criminal justice in reducing crime (Kelling et al. 1974). In these cases, experimental research concerning the effectiveness of commonly accepted practices spurred criminologists and criminal justice agencies to revise and improve intervention strategies.

The scientific justification for place-randomized trials is the assurance that, if the trial is carried out properly, there are no systematic differences between the groups of places randomized, which in turn guarantees statistically unbiased estimates of the intervention’s effect. It also assures that chance and chance imbalances can be taken into account and that a legitimate statistical statement of one’s confidence in the results can be made. Further, Weisburd (2005) and others have pointed out that the simplicity and transparency of the idea of a fair comparison through a randomized trial have a strong appeal for policy makers and practitioners who may neither understand nor trust complex model-based analyses of data from nonrandomized studies. For the statistician, the crucial aspect of this simplicity is that statistical inferences about the effect’s size relative to chance need not depend on econometric, statistical, or mathematical models. The randomization feature permits and invites less dependence on such speculation. And modern methods permit the use of randomization tests.

Empirical evidence on the vulnerability of estimates from nonrandomized studies, in comparison to evidence from randomized trials, has been building since at least the late 1940s. Assuring that one does not depend on weak and easily assailable evidence when stronger evidence can be produced is, at times, an incentive in parts of the policy community. Assuring that one does not needlessly depend on heroic assumptions to produce good estimates of effect, assumptions often required in nonrandomized studies, is an incentive for the scientific and statistical community. Boruch (2007) summarizes these concepts in the fields of health care, employment and training, and economics.

Shadish and colleagues (2008) provided a persuasive illustration that is especially compelling because comparisons between a randomized experiment and an observational study were anticipated as part of their research design, before the data were collected. Their results suggest comparability, as opposed to major differences, if the quasi-experiment is designed well in a particular domain. Theirs is an interesting and potentially important specific case. More generally, the biases in estimating an intervention’s effect based on quasi-experiments can be very large, small, or nonexistent (Weisburd et al. 2001). The variance in estimates of effects appears to be typically larger in quasi-experiments than in randomized tests. So far, and with some narrow exceptions, there is no way to predict the direction or magnitude of such biases, or of the variances of the estimates of an intervention’s effect, in a nonrandomized trial.

It remains to be seen whether similar methodological studies on aggregate-level analyses using randomized versus nonrandomized places, clusters of individuals, or groups yield similar results, that is, uncover serious biases in estimating effects, the variance of the estimates, or both. But it is reasonable to expect biases here also. Bertrand, Duflo, and Mullainathan (2002), for instance, examined biases in estimated standard errors of effects under a true null effect when conventional difference-in-differences methods are used; they found Type I error rates nine times the nominal rate (0.05) of conventional statistical tests, partly on account of serial correlation. More methodological research needs to be done on quasi-experimental approaches to aggregate-level units so as to understand when biases in estimates of effect appear, when biases in estimates of their standard errors appear, and how large the mean square error is relative to place-randomized trials.

It is important to identify dependable scenarios in which bias and variance of estimates generated in nonrandomized trials are tolerable. Doing so can reduce the need for randomized trials (Boruch 2007). It is also not easy, as yet, to identify particular scenarios in which bias in estimates of effect or variance will be small, as was the case in some work by Shadish et al. (2008). Berk’s (2005) more general handling of the strengths and weaknesses of randomized controlled trials at the individual level is pertinent to the analysis of data generated in place-randomized trials also.

The scientific justifications identified here are important in the near term. In the long term, it would be good to understand what the other incentives are and to make these explicit at different levels, for example, the policy, institutional (agency), and individual service-provider levels. Incentives for better evidence may differ depending on whether the stakeholders are members of the police force at different ranks, the mayor’s office, community organizations that have a voice, and so on. Many police executives, for instance, want to improve policing and produce evidence on whether things do improve, and they also usually want to keep their jobs. The two incentives may not always be compatible if city council or mayoral preferences are antagonistic toward defensible evidence generated in well-run field tests. Sturdy indifference to dependable evidence of any kind is, of course, a problem in some policy sectors.

Deploying The Intervention: Implementation, Dimensionalization, And Measurement

Justifications and incentives are essential for assuring that places, and influential people within them, are willing to participate in a randomized trial. Understanding how to deploy a new program requires expertise at ground level, as well as substantial cooperation between researchers and mid- and upper-level management. It also requires constant communication between researchers and street-level practitioners to ensure that daily practices conform to the intended treatment.

The Drug Market Analysis Program, which fostered a series of randomized experiments on crime hot spots, suggests that “ordinary” criminal justice agencies can be brought on board to participate in an experimental study if there is strong governmental encouragement and financial support that rewards participation (see Weisburd et al. 2006). A similar experience in the Spouse Assault Replication Program (SARP) reinforces these observations. Joel Garner, who served as program manager for SARP, noted that he knew that the program was a success the “day that we got 17 proposals with something like 21 police agencies willing to randomly assign offenders to be arrested” (Weisburd 2005, p. 232).

Day-to-day implementation of place-randomized trials also requires significant forethought on the part of both the researchers and middle managers who are involved with the experiments. For instance, during Lum and colleagues’ (2010) trial of the effectiveness of license plate recognition systems, the research team made an effort to directly involve both the officers who would be implementing the intervention and the middle managers who would be responsible for directly supervising the officers in the process of identifying hot spots, developing mechanisms for reporting officer patrol, and overseeing implementation. Given that the intervention involved the participation of two separate police departments, it was also important to understand and account for different practices in supervising officers, in the responsibilities of officers to continue to respond to calls for service, and other functions. Gaining an understanding of the experience and obligations of these practitioners allowed the researchers to revise plans for patrol deployment and ensure that the experiment could realistically be implemented given the normal practices of the two police departments.

The reports on deploying programs and the randomized trials that are published in peer-reviewed journals can be excellent, but they are typically brief. The brevity invites broad questions about how the authors’ detailed experience can be shared with others, for instance, through web-based journals, reports without page limits, workshops, and so on. It also invites more scientific attention to the question of how one can dimensionalize implementation, and to the engineering questions of how to measure or observe the level of implementation inexpensively and how to establish a high threshold condition for implementation.

Resources For The Trial’s Design, Statistical Problems, And Solutions

Some of the important technical references for trial design and model-based data analysis include Hayes and Moulton (2009) on cluster-randomized trials in different countries, mainly in the health arena. Murray’s (1998) book on group-randomized trials focuses more on individuals within groups and considers diverse applications in the USA. Bryk and Raudenbush’s (2002) text considers in detail model-based statistical analyses of multilevel data that may or may not have been generated in randomized trials. The models are complex and entail assumptions that the analyst may not find acceptable.

In the context of place-randomized, cluster-randomized, or group-randomized trials, few publications cover the simplest and least model-dependent approaches to analyzing data from such a trial. Such approaches fall under the rubric of randomization tests or permutation tests. A simple randomization test, for instance, involves recomputing the outcome statistic under every possible reallocation of places to the intervention and control conditions, and then judging the dependability of the effect actually detected against the distribution of statistics so generated. There is no dependence on linear or other parametric models.

Given the relatively small sample sizes involved in place-randomized trials (typically fewer than 200 places are allocated to one or another intervention) and given contemporary computing capacity, generating the distribution of possible outcomes in these approaches is relatively straightforward. The basic concepts were described by Sir Ronald Fisher in the 1930s for trials in which individuals or plants were the independent units of random allocation and analysis. The idea is directly relevant to trials in which places are randomly allocated and inferences are made about the effects of interventions in those places.
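To make the idea concrete, the following is a minimal sketch of such a randomization test in Python. The place-level outcome data, the function name, and the choice of a simple difference in mean outcomes as the test statistic are illustrative assumptions, not features of any particular trial cited above.

```python
import itertools

import numpy as np


def randomization_test(outcomes, treated):
    """Two-sided randomization (permutation) test for a place-randomized
    trial, using the difference in mean outcomes between arms."""
    outcomes = np.asarray(outcomes, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    n, k = len(outcomes), int(treated.sum())

    # Effect actually observed under the real allocation
    observed = outcomes[treated].mean() - outcomes[~treated].mean()

    # Recompute the statistic under every possible allocation of
    # k of the n places to the intervention condition
    diffs = []
    for combo in itertools.combinations(range(n), k):
        mask = np.zeros(n, dtype=bool)
        mask[list(combo)] = True
        diffs.append(outcomes[mask].mean() - outcomes[~mask].mean())

    # p-value: share of allocations at least as extreme as the observed one
    return float(np.mean(np.abs(diffs) >= abs(observed)))


# Hypothetical outcomes for six intervention and six control hot spots
crime = [7, 9, 8, 10, 6, 9, 13, 15, 12, 14, 16, 13]
print(randomization_test(crime, [True] * 6 + [False] * 6))
```

With 12 places, only 924 allocations need to be enumerated; for larger samples, a Monte Carlo sample of random reallocations serves the same purpose.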

The Campbell Collaboration organized conferences on place-randomized trials in 2001 and 2002 to update researchers on the design and conduct of such trials. A special issue of the Annals of the American Academy of Political and Social Science (Vol. 599, 2005) resulted from these conferences. After 2002, the William T. Grant Foundation funded workshops and software development to enhance the technical capacity of researchers to design such trials and analyze the results. At the time of this writing, there was no information on more specialized efforts sponsored by the National Institute of Justice or private foundations in criminology.

One of the major scientific challenges in designing a place-randomized trial is assuring that the sample of places is large enough to detect the relative effects of the interventions. A statistical power analysis is essential to this planning process and allows the researcher to determine the number of places that are needed, given assumptions about the expected effect size, the levels of randomization, and the specific statistical tests of hypotheses and related procedures. User-friendly software for estimating the model-based statistical power of a trial under various assumptions about sample size and other factors is available on the William T. Grant Foundation’s website (http://www.wtgrantfdn.org).
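For intuition about what such software computes, here is a minimal planning sketch based on the standard design-effect approximation for cluster designs; the function name and the example numbers (effect size, units per place, intraclass correlation) are hypothetical, and the sketch is no substitute for dedicated tools such as Optimal Design.

```python
from math import ceil

from scipy.stats import norm


def places_per_arm(delta, n_per_place, icc, alpha=0.05, power=0.80):
    """Approximate number of places needed per arm in a place-randomized
    trial: the normal-approximation sample-size formula inflated by the
    design effect for clustering.

    delta       -- standardized effect size (mean difference / SD)
    n_per_place -- average number of observation units per place
    icc         -- intraclass correlation among units within a place
    """
    design_effect = 1 + (n_per_place - 1) * icc
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * z**2 * design_effect / (n_per_place * delta**2))


# Hypothetical: a 0.3 SD effect, 50 incidents per place, ICC of 0.05
print(places_per_arm(0.3, 50, 0.05))  # -> 13 places per arm
```

Note how the intraclass correlation, not the number of people, governs the design: as n_per_place grows, the required number of places approaches a floor set by the ICC, so adding people within places cannot substitute for randomizing more places.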

There are a number of common mistakes made during analysis of place-randomized trials. First, researchers have often wrongly employed the number of people in places for hypothesis testing and power analysis rather than employing the number of places randomly allocated (see, e.g., Kelling et al. 1974). Second, simple linear regression models are often used to analyze data even though many of the modeling assumptions are not necessarily true, as the “uncertainty” in randomized trials stems from the assignment process, rather than unobservable disturbances that are assumed to be the source of variation in regression models (see Freedman 2008). It is very difficult to determine how misleading the results of such regression analyses are likely to be in multilevel contexts. Analyses based on regression models with categorical, count, or time-to-failure outcomes create additional problems (Freedman 2008).

Place-based trials often involve the study of people within places; thus, a very small number of independent units (places) may be present at the randomized level, with a large number of related units (people) nested within each place. A key issue in this scenario is that the individuals or other units within a place were not themselves randomly assigned to the intervention or control conditions. Randomization at the place level ensures that preexisting differences between the treatment and control groups are not systematic; this is due to the randomization process and any matching that precedes it. However, since all individuals within each place are automatically assigned the same condition, intervention or control, the analysis is riskier to carry out at the individual level (Raudenbush and Bryk 2002; Spybrook et al. 2006).

Traditional techniques for clustered data have involved either aggregating all information to the group level, as in a randomization test or a t-test of mean differences between place-level outcomes, or disaggregating all group-level traits and confining attention to the individuals within the intervention and control conditions. For Raudenbush and Bryk (2002), the problem with the first method is that all within-group information is wasted; it is omitted from the analysis. The problem with the second method is that the observations are no longer independent: all individuals within a given place share the same intervention or control condition and are not independent from one another, on account of the intraclass correlation among people within a place. To account for the dependence that arises when using samples of individuals nested within places, researchers can use several statistical methods to correct for clustering, such as hierarchical linear modeling (HLM) and generalized least squares estimation with robust standard errors.
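The first, aggregate approach is simple enough to sketch directly: compare mean place-level outcomes across arms, so that the number of independent units in the analysis equals the number of places randomized. The data layout and function name below are assumptions for illustration.

```python
import numpy as np


def place_mean_comparison(y, place_id, treated_places):
    """Aggregate individual outcomes to place means and compare arms.

    y              -- outcome for each individual nested within a place
    place_id       -- identifier of the place each individual belongs to
    treated_places -- ids of the places randomized to the intervention
    """
    y, place_id = np.asarray(y, dtype=float), np.asarray(place_id)
    means = {p: y[place_id == p].mean() for p in np.unique(place_id)}
    t = np.array([m for p, m in means.items() if p in treated_places])
    c = np.array([m for p, m in means.items() if p not in treated_places])

    # The standard error is computed across place means, not across
    # people: the place, not the person, is the independent unit
    diff = t.mean() - c.mean()
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    return diff, se


# Hypothetical: four places, three individuals each; places 0 and 1 treated
y = [5, 6, 7, 4, 5, 6, 8, 9, 10, 9, 8, 10]
ids = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
print(place_mean_comparison(y, ids, {0, 1}))
```

As Raudenbush and Bryk note, the price of this simplicity is that within-place variation is discarded; multilevel models retain it, but at the cost of stronger assumptions.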

There are several options for analyzing the results of place-based randomized experiments in a largely model-free manner. Small and colleagues (2008), for instance, emphasize randomization/permutation tests with different kinds of adjustments for covariates, without relying on any form of HLM. Imai, King, and Nall (2009) advanced the state of the art by showing how matched-pair designs in place-randomized trials can often enhance precision in estimates of the effects of interventions, increase the statistical power of the analysis in detecting effects, and better assure unbiased estimates of the variability of the effect with a small number of places. A traditional concern with matched-pair designs in this context, however, is that matching may unnecessarily reduce degrees of freedom and thus the statistical power of the studies.
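The arithmetic of the matched-pair design is transparent: differences are taken within pairs of comparable places, and the pair, rather than the place, becomes the unit of analysis. The sketch below illustrates only that basic logic, not the full estimator developed by Imai and colleagues; the input format and numbers are hypothetical.

```python
import numpy as np


def paired_estimate(pairs):
    """Mean within-pair difference and its standard error for a
    matched-pair place-randomized trial."""
    d = np.array([t - c for t, c in pairs], dtype=float)
    # n - 1 degrees of freedom, where n is the number of pairs
    return d.mean(), d.std(ddof=1) / np.sqrt(len(d))


# Hypothetical: five pairs of hot spots matched on baseline crime counts
print(paired_estimate([(8, 11), (6, 9), (10, 10), (7, 12), (9, 13)]))
```

The degrees-of-freedom concern noted above is visible here: five pairs leave only four degrees of freedom, which matching must offset by making the within-pair differences less variable.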

As a practical matter, the work by Imai and his colleagues (2009) shows that the number of units within a place is an important matching variable that can reduce costs and increase statistical power in a trial. Bloom and Riccio (2005) advance the state of the art in another important respect by coupling a conventional comparison of randomized places with a time-series analysis of data from the places. While the coupling of randomized and nonrandomized approaches to estimating the effects of interventions is not a new concept (see, e.g., Boruch 1985), incorporating these ideas thoroughly into design and analysis is unusual.

Another novel contribution to the design of place-randomized trials lies in the idea of “stepped wedge” and “dynamic wait-list” designs. The wait-list design assigns a random half sample of eligible places to the intervention for some period of time. The remaining half sample receives the intervention after this period has ended, under the assumption that all units will eventually receive the intervention. While this approach satisfies ethical concerns that may arise if a beneficial treatment is withheld from certain places, it may be problematic if a delay in treatment has consequences. To address such concerns about the sequence of treatment, places may also be divided into subsets that are periodically given treatments of a given intensity at a given time period. In a related area of work, Hussey and Hughes (2006) develop the idea of stepped wedge designs, with the “steps” being the points at which successive subsets of places are randomly assigned the treatment.
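To make the structure of these designs concrete, the following sketch builds a stepped wedge crossover schedule in which randomly ordered subsets of places switch to the intervention at successive steps, and all places are treated by the final period. The function name and the equal-sized subsets are illustrative assumptions.

```python
import numpy as np


def stepped_wedge_schedule(n_places, n_steps, seed=0):
    """Places-by-periods 0/1 matrix for a stepped wedge design:
    0 = control, 1 = intervention. Column 0 is an all-control baseline
    period; one subset of places crosses over at each subsequent step."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_places)        # random crossover order
    groups = np.array_split(order, n_steps)  # one subset per step

    schedule = np.zeros((n_places, n_steps + 1), dtype=int)
    for step, subset in enumerate(groups, start=1):
        schedule[subset, step:] = 1          # treated from this step onward
    return schedule


# Hypothetical: six places crossing over in three steps of two places each
print(stepped_wedge_schedule(6, 3))
```

The dynamic wait-list design can be read off the same matrix with two steps: half the places are treated in the first period, and the remainder after it ends.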

The stepped wedge and dynamic wait-list designs are innovative and have attractive features. However, the power and design analyses of these approaches have relied on hierarchical linear models with unusual and sometimes untestable assumptions. There may be alternatives that are less model dependent.

Registers Of Place-Randomized Trials

Learning about place-randomized trials can be difficult. Reports may be published in a variety of scientific journals and may appear in the unpublished (gray) literature. Relying on web-based searches may not be as effective as hand searches for locating randomized trials in the social sciences. Taljaard et al. (2009), for instance, found that fewer than 50% of published place-randomized trials in health are appropriately identified as such in titles and abstracts.

There are a number of data sources concerning randomized trials that can help to reduce these difficulties. For instance, the Cochrane and Campbell Collaborations, which are concerned with the fields of health care and the social sciences, respectively (including psychology, education, criminal justice, and sociology), publish reviews of the results of randomized trials in which references to the original sources can be accessed through their websites. In the United States, one can learn about trials in the health sector, including place-randomized trials, at http://www.clinicaltrials.gov. Often the interventions tested or the outcome variables examined in these trials are pertinent to criminologists as well as to health-care researchers and practitioners. Finally, in 2007 David Greenberg and Mark Schroder developed a new register of randomized trials oriented toward tests of economic interventions under the auspices of the Social Sciences Research Network (SSRN).

Standards for reporting randomized trials in which individuals are the unit of allocation were a product of the late 1990s. The Consolidated Standards of Reporting Trials (CONSORT) statement, which provided guidance on the composition of a report on a randomized trial in health care, has since been extended to provide guidance on reporting place-randomized trials. For instance, studies should provide information on the rationale for the randomized design and measure outcomes at all levels of sampling and statistical inference. As of this writing, there are no uniform standards for reporting randomized trials in criminal justice. These problems have been raised by scholars of criminology who have emphasized that “reporting validity” is critical for the development of experimental science in criminology (Perry et al. 2009; Gill 2009; Farrington 2003). A lack of clear and uniform standards in this area of research means that reanalysis of experimental studies is difficult and often impossible. While some funding agencies have tried to make policy to assure that independent analysts have access to record data in the research that they sponsor, clear criteria, such as those included in the CONSORT statement, are critical for advancing experimental criminology.

Ethics And The Law

As of this writing, no professional society or government agency has promulgated explicit statements about the ethical propriety of place-randomized trials. Applying contemporary standards to place-randomized trials can be awkward and imperfect (see, e.g., Boruch 2007), and at times, standards of ethics and government regulations are not clearly relevant to place-randomized trials (see, e.g., Taljaard et al. 2009, for a discussion of these issues in health care). For instance, ethical standards established for the treatment of human subjects are arguably inappropriate for randomly allocated treatments in a city’s crime hot spots. The ethical issues surrounding place-based trials in criminology are complex. For instance, researchers may confront concerns about equality of treatment, as in research on hot spot policing, wherein residents were concerned that concentrated patrol in one area would result in reduced services in other locations (see Weisburd 2005). Or officers could feel that sitting for extended periods of time reduces their capacity to protect the public (Weisburd et al. 2006). Finally, residents of areas that are designated as crime hot spots may feel as if they are being unfairly targeted by the police. In these cases, the implementation of a place-randomized trial can undermine the relationship between the practitioners involved in the trial and the clients whom they serve (see also Clarke and Cornish 1972).

Notwithstanding the ethical dilemmas engendered by place-based trials over the last decade, there appear to have been no serious challenges in the US courts to the conduct of place-randomized trials. For instance, the former director of the Institute of Education Sciences (IES), Russ Whitehurst, reported in a personal communication (2008) that he had encountered no court challenges as a consequence of the IES’s sponsoring many such trials in education during 2002–2008. Similarly, we are aware of no judicial challenges in the context of place-randomized trials in the crime sector, such as the trials on crime hot spots, barroom violence, convenience store vulnerability to holdups, and so on.

Conclusions And Implications For The Future

Place-randomized trials have become important not only in criminology but also in a broad range of other disciplines because they employ substantive theory about the effects of intervention at the place level. The experience of people who have been involved in the design, execution, and analysis of place-randomized trials is an important source of intellectual and social capital. This experience should be exploited by future researchers and incorporated into graduate and postgraduate education, as should the knowledge of researchers in disciplines such as medicine and education who have substantially advanced the state of knowledge about cluster-randomized trials more generally. The statistical armamentarium for the design and analysis of place-randomized trials is fundamental and rests on the simple idea of randomization rather than on complex statistical models. Future development of place-randomized trials should better explore the implications of randomizing different interventions when the expected effects of the interventions are in “equipoise” and should advance investigations into the ethics of this type of research. Further, standardized reporting of the design, execution, and analysis of place-randomized trials should be advanced in criminology to allow for reanalysis and secondary analysis, particularly of controversial studies.

Walter Lippmann, an able social scientist and newspaper writer, had a strong interest in cops and in crimes by adults and adolescents, and he was familiar with political ambivalence about, or opposition to, sound evidence. He was a street-level criminologist, remarkable writer, and good thinker. In the 1930s, Lippmann wrote: “The problem is one for which public remedies are most likely to be found by choosing the most obvious issues and tackling them experimentally... the commissions of study are more likely to be productive if they can study the effects of practical experimentation.” Nowadays, trialists in criminology would have little difficulty in subscribing to Lippmann’s counsel.

Bibliography:

  1. Berk R (2005) Randomized experiments as the bronze standard. J Exp Criminol 1(4):417–433
  2. Bertrand M, Duflo E, Mullainathan S (2002) How much should we trust difference in differences estimates? Working paper 8841. National Bureau of Economic Research, Cambridge
  3. Bloom HS, Riccio JA (2005) Using place random assignment and comparative interrupted time-series analysis to evaluate the Jobs-Plus employment program for public housing residents. Ann Am Acad Polit Soc Sci 599:19–51
  4. Boruch RF (2007) Encouraging the flight of error: ethical standards, evidence standards, and randomized trials. N Dir Eval 133:55–73
  5. Boruch RF, Wothke W (1985) Seven kinds of randomization plans for designing field experiments. N Dir Prog Eval 28:95–113
  6. Bryk AS, Raudenbush SW (2002) Hierarchical linear models. Sage, Thousand Oaks
  7. Clarke RVG, Cornish DB (1972) Controlled trial in institutional research – paradigm or pitfall for penal evaluators. The Home Office, London
  8. Farrington DP (2003) Methodological standards for evaluation research. Ann Am Acad Polit Soc Sci 587(1):49–68
  9. Freedman DA (2008) Randomization does not justify logistic regression. Stat Sci 23:237–249
  10. Gill C (2009) Reporting in criminological journals. Report for seminar on advanced topics in experimental design. Graduate School of Education and Criminology Department, University of Pennsylvania, Philadelphia
  11. Hussey MA, Hughes JP (2006) Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials 28(2):182–191
  12. Imai K, King G, Nall C (2009) The essential role of pair matching in cluster randomized experiments, with application to the Mexican universal health insurance evaluation. Stat Sci 24:29
  13. Kelling GL, Pate T, Dieckman D, Brown CE (1974) The Kansas City preventive patrol experiment: a technical report. Police Foundation, Washington, DC
  14. Lippmann W (1963) The young criminals. In: Rossiter C, Lare J (eds) The essential Lippmann. Random House, New York, Originally published 1933
  15. Lum C, Merola L, Willis J, Cave B (2010) License plate recognition technology (LPR): impact evaluation and community assessment. Accessed from http://gemini.gmu.edu/cebcp/lpr_final.pdf
  16. Mazerolle L, Price J, Roehl J (2000) Civil remedies and drug control: a randomized trial in Oakland, California. Eval Rev 24(2):212–241
  17. McCord J (2003) Cures that harm: unanticipated outcomes of crime prevention programs. Ann Am Acad Polit Soc Sci 587(1):16–30
  18. Murray D (1998) Design and analysis of group randomized trials. Oxford University Press, New York
  19. Perry AE, Weisburd D, Hewitt C (2009) Are criminologists reporting experiments in ways that allow us to access them? Unpublished manuscript/report
  20. Raudenbush SW, Bryk AS (2002) Hierarchical linear models: applications and data analysis methods. Sage, Thousand Oaks
  21. Shadish WR, Clark MH, Steiner PM (2008) Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. J Am Stat Assoc 103(484):1334–1356
  22. Sherman LW, Weisburd DL (1995) General deterrent effects of police patrol in crime “hot spots”: a randomized, controlled trial. Justice Q 12(4):625–648
  23. Spybrook J, Raudenbush SW, Liu X, Congdon R (2006) Optimal design for longitudinal and multilevel research: documentation for the “Optimal Design” software. Accessed from http://rmcs.buu.ac.th/statcenter/HLM.pdf
  24. Taljaard M, Weijer C, Grimshaw J, Bell Brown J, Binik A, Boruch R, Brehaut J, Chaudry S, Eccles M, McRae A, Saginur R, Zwarenstein M, Donner A (2009) Cluster randomized trials: rationale and design of a mixed methods research study
  25. Warburton AL, Shepherd JP (2000) Effectiveness of toughened glassware in terms of reducing injury in bars: a randomized controlled trial. Injury Prev 6:36–40
  26. Weisburd D (2003) Ethical practice and evaluation of interventions in crime and justice: the moral imperative for randomized trials. Eval Rev 27(3): 336–354
  27. Weisburd D (2005) Hot spots policing experiments and lessons from the field. Ann Am Acad Polit Soc Sci 599:220–245
  28. Weisburd DL, Green L (1995) Policing drug hot spots: the Jersey City drug market analysis experiment. Justice Q 12(4):711–735
  29. Weisburd D, Lum C, Petrosino A (2001) Does research design affect study outcomes? Ann Am Acad Polit Soc Sci 578:50–70
  30. Weisburd D, Wycoff L, Ready J, Eck JE, Hinkle JC, Gajewski F (2006) Does crime move around the corner? A controlled study of spatial displacement and diffusion of crime control benefits. Criminology 44(3):549–591
