Predictive Policing Research Paper


Predictive policing is a new concept for law enforcement in the twenty-first century. While still in its infancy and relatively untested, predictive policing has the potential to change the way in which law enforcement deals with crime and victims. This research paper describes predictive policing in terms of its definition and roots, the theories and models that have been developed, applications in law enforcement, and the issues that surround it.

Conceptually, predictive policing involves the use of data and predictive analytics to predict or forecast where and when the next crime or series of crimes will take place. The concept has engendered new terminology in law enforcement. “Predictive analytics,” “data mining,” “nonobvious relationships,” and “predictive spatial analysis” are among the new phrases used by chief executives, policy makers, and researchers to describe aspects of predictive policing. These and other phrases will be described and discussed.

Definition

Bill Bratton et al. (2009) initially defined predictive policing as “any policing strategy or tactic that develops and uses information and advanced analysis to inform forward-thinking crime prevention.” The Los Angeles Police Department and Uchida (2009) refined the definition, describing predictive policing as a “multi-disciplinary, law enforcement-based strategy that brings together advanced technologies, criminological theory, predictive analysis, and tactical operations that ultimately lead to results and outcomes – crime reduction, management efficiency, and safer communities.” Other definitions have emerged as well. Pearsall (2009) sees predictive policing as a “generic term for any crime fighting approach that includes a reliance on information technology (usually crime mapping data and analysis), criminology theory, predictive algorithms, and the use of data to improve crime suppression on the streets.” The National Institute of Justice’s Geospatial Technical Working Group (Wilson et al. 2009) broadly discussed predictive policing as including risk-based assessments, problem solving, crime prevention, and “as a strategy by which uncertainties are minimized and expectations are maximized.”

Fundamentals

Using the LAPD/Uchida definition (Uchida 2009), predictive policing is built on multiple pillars including advanced technologies and analytics from the business world, statistical forecasting, criminological theory, and tactics and operations that are based on problem-oriented, intelligence-led, and hot spot policing. Understanding each of these components and how they converge into predictive policing is the central focus of this research paper.

Advanced Technologies And Analytics: Business Intelligence

Predictive policing is based in part on business intelligence models and data analysis tools used by retailers like Wal-Mart and Amazon, insurance companies, and credit card companies like American Express. Netflix uses predictive analytics and data mining as the basis for its business model, establishing its regional centers, managing its inventory, and marketing its services based on the predicted behaviors of its customers.

In sports, baseball statisticians have developed models and analytics to predict performance. Members of the Society for American Baseball Research (SABR) use statistical models to predict or forecast the performance of individual ballplayers and teams, an approach Bill James popularized with his Baseball Abstracts beginning in 1977. Though largely ignored by Major League Baseball executives for almost 25 years, sabermetricians gained prominence when Oakland A’s general manager Billy Beane used data and analytics to propel the A’s to division championships in 2002 and 2003. Beane’s story was chronicled in Moneyball (Lewis 2004). As a result of his success, every baseball team now uses some form of statistical analysis to measure performance.

Other sports-related enterprises have followed this trend in using statistics and predictive algorithms. The New England Patriots were among the first National Football League teams to use analytics to evaluate college players, and the Leicester Tigers, a two-time European rugby champion in the United Kingdom, use predictive tools to reduce injuries.

The commonalities of these diverse businesses are their use of data mining and predictive analytics. Data mining entails the analysis of data to identify trends, patterns, or relationships among the data. Basically, data mining is about “practical application – application of the algorithms developed by researchers in artificial intelligence, machine learning, computer science, and statistics” (Williams 2011).

Data mining includes the development of a predictive model. For example, Wal-Mart relied upon data from its supply chain to predict future demand. Analysts mined the data for relationships between weather patterns and customer needs. They found that in anticipation of a large weather event like a hurricane or tropical storm, products like duct tape, bottled water, plywood, and Pop-Tarts should be sent to the affected area. While water, tape, and plywood are typical necessities, the demand for Pop-Tarts (specifically strawberry ones) was discovered through data mining techniques and serves as an example of a “non-obvious relationship” (Beck and McCue 2009).
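
To make the idea concrete, here is a toy Python sketch of how an analyst might surface such a relationship by correlating a storm indicator with per-product sales. The data and product lineup are invented for illustration; this is not Wal-Mart's actual method.

```python
# Toy illustration of surfacing a "non-obvious relationship":
# correlate a storm indicator with per-product weekly sales.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
weeks = 200
storm = rng.binomial(1, 0.1, weeks)  # 1 = major storm that week
sales = pd.DataFrame({
    "bottled_water": 100 + 60 * storm + rng.normal(0, 10, weeks),
    "duct_tape":     40 + 25 * storm + rng.normal(0, 5, weeks),
    "pop_tarts":     30 + 20 * storm + rng.normal(0, 5, weeks),  # the surprise
    "toothpaste":    50 + rng.normal(0, 5, weeks),               # no storm effect
})
# Rank products by how strongly their sales move with storms.
uplift = sales.corrwith(pd.Series(storm)).sort_values(ascending=False)
print(uplift)
```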

Predictive analytics is a fairly broad phrase that describes a variety of statistical and analytical techniques used to develop models for prediction. The form of these predictive models varies, depending on the behavior or event that is being predicted. Most predictive models generate a score, with a higher score indicating a higher likelihood of the given event occurring. Predictive analytics relies on increasingly sophisticated statistical methods, including multivariate analyses like time series and advanced regression models. These techniques enable the analyst to determine trends and relationships that may not be readily apparent or are “nonobvious” relationships.
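
As a hedged illustration of the scoring idea, the following sketch fits a logistic regression to synthetic location-week records using scikit-learn; the predictor names are hypothetical, and the predicted probability plays the role of the risk score described above.

```python
# Minimal sketch of a predictive scoring model (hypothetical features,
# synthetic data) -- not any agency's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors for a location-week: prior burglaries,
# distance to the nearest bar (km), and vacant-housing rate.
X = np.column_stack([
    rng.poisson(1.5, n),          # prior_burglaries
    rng.exponential(1.0, n),      # dist_to_bar_km
    rng.uniform(0, 0.4, n),       # vacancy_rate
])
# Synthetic outcome: burglary occurs next week (1) or not (0).
logit = -2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 3.0 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)
# The "score" is the predicted probability of the event; higher
# scores flag locations at greater risk.
scores = model.predict_proba(X)[:, 1]
print("Top 5 risk scores:", np.sort(scores)[-5:])
```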

Criminological Research And Theory

Criminologists have applied forecasting and predictive methods for nearly a hundred years. Berk (2008) notes that research on crime trends in the nineteenth century paved the way for explicit forecasts on both aggregate patterns of crime and the behavior of individuals. As data and statistical tools have improved over time, a wide variety of forecasting and prediction work has emerged. He cites 18 examples of forecasting studies between 1928 and 2007 (Berk 2008: 221), many of which forecast the behavior of offenders after release from prison or forecast prison populations. He cautions that many of the studies did not accurately predict behavior or crime and that the forecasts have not been properly evaluated.

In addition to Berk’s examination of forecasts, the subtext for predictive policing lies within a number of macro-level, quantitative criminological research studies, including those that use repeat victimization, social disorganization, collective efficacy, strain, economic deprivation, routine activity, deterrence or rational choice, social support, and subcultural theories. These theories use “predictor variables” to forecast and predict crime locations and victims.

The theory of repeat victimization posits that for every type of crime studied (with the exception of homicide), the risk of victimization increases following an initial event; that a small proportion of victims accounts for a large proportion of crime; and that where repeat victimizations occur, they usually do so within a brief period, allowing only a small window of opportunity for intervention (Johnson et al. 2009).

Theories of social disorganization and collective efficacy are also relevant in identifying and predicting where crime may or may not flourish. Sociological literature over the last 30 years establishes the relationship between community characteristics and problem behavior. Areas characterized by high levels of economic disadvantage have long been associated with crime and other forms of disorder. Bursik and Grasmick (1993) argued that concentrated disadvantage and other neighborhood-level indicators of social disorganization create conditions that prevent meaningful pro-social institutions from flourishing. Institutions such as schools, churches, and formal and informal community groups play an important role in regulating the behavior of residents. Sampson and Groves (1989) found that disorganized neighborhoods were also characterized by low organizational participation, sparse friendship networks, and large numbers of unsupervised youth. Neighborhoods with the most extreme forms of social disorganization also tended to experience greatly reduced levels of collective efficacy. Collective efficacy, which refers to the capacity of residents and other groups to exert social control, has important implications for how neighborhoods are informally managed by residents. Research shows that neighborhoods with higher levels of collective efficacy generally experience lower levels of violence; where community organization is poor, the capacity of communities to develop formal and informal mechanisms of social control is diminished. From these findings, one can predict that violent crime may be higher in areas of low collective efficacy and lower in areas of high collective efficacy.

Other criminological theories can be applied to predictive policing models. Environmental criminology, which includes routine activity, rational choice, and situational crime prevention, is the broad, general approach that is most germane. Routine activity theory explains the components of a criminal incident by breaking it down to three basic elements: (1) a likely offender, (2) a suitable target, and (3) the absence of a capable guardian. It is only when these three elements converge in time and space that a crime occurs (Cohen and Felson 1979). This perspective established the spatial and temporal context of criminal events as an important focus of study.

Situational crime prevention involves crime prevention strategies that are aimed at reducing the criminal opportunities that arise from the routines of everyday life. Situational crime prevention assumes that crime is a rational choice by offenders and that crime can be prevented by hardening targets to increase the risks and reduce the rewards. British scholars, led by Ronald Clarke, explored the practical application of this theory in the United Kingdom. They saw that “opportunity” was at the core of crime, and rather than trying to reform offenders, they sought to reduce the criminal opportunities available to the criminal. Thus, they sought to change the environment through target hardening, improving surveillance of areas that might attract crime, and deflecting potential offenders from settings in which crimes might occur.

Criminological theory is important in predictive policing because it assists in determining the appropriate unit of analysis for prediction, guides the specification of the predictive algorithms, and determines the measures of an intervention or tactic employed by the police.

Problem-Oriented, Intelligence-Led, And Hot Spot Policing

Predictive policing is tied directly to principles, research findings, and tactics and operations established in problem-oriented, intelligence-led, and hot spot policing. Central to the notion of problem-oriented policing is the idea that police agencies need to become proactive rather than reactive, moving away from simply reacting to citizen calls-for-service and toward identifying and solving the problems that generate excessive calls (Goldstein 1990). For example, research in Minneapolis showed that only 3.3 % of addresses and intersections were responsible for 50.4 % of all calls for which a police car was dispatched (Sherman et al. 1989). Solving the chronic, recurring problems underlying these “hot spots” might reduce crime, fear of crime, and calls-for-service, and improve police-community relations. While spatial proximity is one of the ways that individual incidents or calls to the police can be grouped into larger problem types, there are many others. For instance, police might group individual incidents together into larger problem types based on similarities in victim characteristics, offender characteristics, modus operandi, or temporal patterns.
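
The Minneapolis-style concentration statistic is straightforward to compute from calls-for-service data. The sketch below uses synthetic, heavy-tailed call counts to show the calculation; the numbers are illustrative, not a replication of Sherman et al. (1989).

```python
# Sketch: measure how concentrated calls-for-service are across
# addresses, in the spirit of the Minneapolis finding (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
# Simulate call counts per address with a heavy-tailed distribution.
calls = rng.pareto(1.2, 50_000) + 1
calls = np.sort(calls)[::-1]                 # busiest addresses first
cum_share = np.cumsum(calls) / calls.sum()
top = np.searchsorted(cum_share, 0.5) + 1    # addresses covering 50% of calls
print(f"{top / len(calls):.1%} of addresses account for 50% of calls")
```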

Likewise, hot spot policing implicitly and directly includes predictive components. Weisburd’s notion of “the law of crime concentrations at places” refers to research that shows “irrespective … of the city, or even country examined, … 20 to 25 % of crime is found at only 1 % of the places in a city” (Weisburd 2012). By extension and implication, these are “predictable” locations.

Intelligence-led policing is also closely associated with predictive and problem-oriented policing because they all fit into the community-oriented policing paradigm. Ratcliffe defines intelligence-led policing as “a business model and managerial philosophy where data analysis and crime intelligence are pivotal to an objective, decision-making framework that facilitates crime and problem reduction, disruption and prevention through both strategic management and effective enforcement strategies that target prolific and serious offenders” (Ratcliffe 2008).

Predictive Models

A number of models currently exist for predicting crime within the geography of crime literature (Groff and La Vigne 2002) and in texts and articles on statistical and mathematical methods (Berk 2008, 2009; Williams 2011). Prediction models may use a variety of regression techniques, including linear regression (OLS), partial and stepwise regression, logit or probit regressions, and regression splines. Regression models describe the relationship between the dependent variable (in predictive settings, the variable to be predicted) and independent, explanatory variables; the analyst specifies a relationship and then tests how well the regression model fits it.
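
For example, a count regression of this kind can be sketched in a few lines. The snippet below fits a Poisson regression to synthetic block-week data (the predictor names are hypothetical) and checks fit on held-out cases.

```python
# Hedged sketch: a count-regression forecaster for weekly crimes per
# block (synthetic data; predictor names are hypothetical).
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.poisson(2, n),         # crimes in the prior 4 weeks
    rng.uniform(0, 1, n),      # disadvantage index
])
y = rng.poisson(np.exp(-0.5 + 0.3 * X[:, 0] + 1.0 * X[:, 1]))

# Fit on one portion of the data and check predictions on the rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = PoissonRegressor().fit(X_tr, y_tr)
print("mean absolute error:", np.abs(model.predict(X_te) - y_te).mean())
```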

Geography Of Crime

Geographic information systems (GIS) have been used to develop models for forecasting and predicting crime. Over the past decade, these models have ranged from simple to complex. The simplest forecasting method is common to Compstat, in which the hot spots of last week or last month are assumed to be the hot spots of next week or next month. Crime analysts prepare maps of crime that has already occurred, and those maps are used to determine where to deploy officers and where to intervene. Other methods include the use of raster GIS, a method included in ArcView’s Spatial Analyst that interpolates a surface of crime resembling a weather map.
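
A minimal version of such an interpolated crime surface can be built with a kernel density estimate, as sketched below on synthetic incident coordinates; this stands in for, but is not identical to, the Spatial Analyst workflow.

```python
# Sketch of a raster-style hotspot surface: a kernel density estimate
# over past crime locations, evaluated on a grid (synthetic
# coordinates; a real workflow would use projected x/y).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Two clusters of incidents standing in for hot spots.
pts = np.vstack([rng.normal([2, 2], 0.3, (150, 2)),
                 rng.normal([7, 5], 0.5, (100, 2))])
kde = gaussian_kde(pts.T)

xs, ys = np.mgrid[0:10:100j, 0:8:80j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
# Cells with the highest density are next period's predicted hot spots
# under the "last period's hot spots persist" assumption.
hot = np.unravel_index(density.argmax(), density.shape)
print("hottest grid cell:", hot)
```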

Risk Terrain Modeling (RTM) (Caplan et al. 2011) uses multiple layers of raster maps, relying on the “dynamic interaction between social, physical and behavioral factors that occurs at places.” RTM identifies particular risk factors for crime, assigns a value to every place within a specific geographic area, and signifies the presence, absence, or intensity of each risk factor. When multiple layers are combined into a risk terrain map, they provide a composite risk value for particular places: the higher the risk value, the greater the likelihood of a crime occurring at that location. RTM has been applied to shootings, violent crimes, and burglaries in Newark and Philadelphia (Caplan et al. 2011).
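
The layering logic can be sketched with simple arrays: rasterize each risk factor, then combine the layers into a composite map. The layers and equal weights below are invented for illustration; RTM practice derives the factors and weights empirically.

```python
# Minimal sketch of the risk terrain idea: combine rasterized risk
# layers into one composite map. Layers and weights are invented.
import numpy as np

rng = np.random.default_rng(5)
shape = (50, 50)  # grid cells covering the study area
# Each layer codes the presence of one risk factor per cell.
bars       = (rng.random(shape) > 0.95).astype(float)  # bar locations
foreclosed = (rng.random(shape) > 0.90).astype(float)  # foreclosures
transit    = (rng.random(shape) > 0.97).astype(float)  # transit stops

# Composite risk: an equal-weight sum here; real RTM work would
# estimate layer weights from the data.
risk = bars + foreclosed + transit
print("highest-risk cell:", np.unravel_index(risk.argmax(), risk.shape),
      "risk value:", risk.max())
```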

Another example of predictive mapping is ProMap, a prospective mathematical model that predicts where burglaries will next occur. Using repeat victimization theory, Johnson et al. (2009) predicted the greatest imminent risk for grid locations where the greatest number of burglaries had clustered in time and space, both recently and nearby. The researchers theorized that the risk of burglary diminishes over time and used a short temporal interval of 1 week. They compared the predictive ability of ProMap to two methods: Kernel Density Estimation (KDE) and maps generated by analysts using police beat geography and data from Merseyside, UK. The ProMap model consistently performed better than the KDE method in identifying 10 %, 25 %, 50 %, and 75 % of burglaries. The mapping approach used by analysts was worse than the KDE method, as it “failed to exceed chance for almost every trial” (Johnson et al. 2009).
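
A simplified prospective hot-spotting calculation, loosely after the inverse-distance and inverse-time weighting described by Bowers et al. (2004), might look like the following; the exact ProMap kernel and parameters differ.

```python
# Sketch of prospective hot-spotting in the spirit of ProMap: each past
# burglary contributes risk to nearby cells, decaying with distance and
# elapsed time. An illustrative simplification, not the actual kernel.
import numpy as np

def prospective_risk(events, grid_shape, now, bandwidth=4, window=8):
    """events: list of (row, col, week); risk is evaluated at week `now`."""
    risk = np.zeros(grid_shape)
    rows, cols = np.indices(grid_shape)
    for r, c, week in events:
        age = now - week
        if not (0 <= age <= window):
            continue
        dist = np.maximum(np.abs(rows - r), np.abs(cols - c))  # cells away
        near = dist <= bandwidth
        # Inverse-distance and inverse-elapsed-time weighting.
        risk[near] += 1.0 / ((1 + dist[near]) * (1 + age))
    return risk

events = [(10, 10, 51), (11, 10, 52), (30, 40, 45)]
risk = prospective_risk(events, (50, 50), now=52)
print("peak predicted-risk cell:", np.unravel_index(risk.argmax(), risk.shape))
```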

Swatt (2003) attempted forecasts of burglary and robbery in Omaha on a monthly basis for Census blocks using a Hierarchical Linear Model framework. He found that overdispersed models were overwhelmed by the large number of zero event counts and thus could not accurately forecast either crime. Gorr and Olligschlaeger (2002) compared simple univariate, OLS regression, and neural network strategies for forecasting calls-for-service in Pittsburgh. Their results did not convincingly demonstrate an advantage for the neural network method. Brown and Kerchner (2000) developed a methodology for forecasting criminal events based on the “feature space” of offender site selection, a set of spatial, temporal, and site characteristics that influence an offender’s decision to engage in crime. This strategy, as well as the strategy outlined by Liu and Brown (2000), outperformed a simple model for predicting breaking and entering, with only limited improvement for auto theft. Recently, Fox and colleagues (2012) extended this model by incorporating it within a Hierarchical Bayesian Model to introduce spatial and temporal effects. Their results suggest that although the extended model outperforms the original Brown and Kerchner (2000) strategy for predicting short-term variation in assaults at the Census block group level, the added computation time limits the applicability of this strategy.

Statistical Learning Models

“Statistical learning” includes methods that experiment with many different models for data, drawing heavily on the data mining and machine learning traditions of data analysis. In describing the use of statistical learning, Berk (2008) begins with regression splines and moves to other models, particularly those that make use of decision trees or classification and regression trees (CARTs). Decision trees or CARTs are the building blocks of data mining. Statisticians have found that using one decision tree model is overly simplistic, so combining multiple models into a single ensemble of models – a forest of trees – works better for prediction (Williams 2011). “Bagging,” “boosting,” and “random forests” are among the algorithms developed within statistical learning for predicting and forecasting.

The random forest algorithm holds substantial promise as a method of forecasting crime using statistical learning. The random forest algorithm is an extension of the decision tree or CART method. The random forest algorithm (see Berk 2008 and Williams 2011) grows a large number of decision trees (500 or more; Berk 2008) from a database. Each tree is grown on a random sample of observations (this is referred to as “bagging,” as a random sample of observations is placed in a “bag”), and at each split a random sample of predictors is considered. After the trees are formed, classifications are made by majority vote across the trees (for cases in the training data, typically across the trees for which the case was not in the bagged data). In other words, each tree develops its own predictive value, and when there are 500 trees in a random forest, the majority of trees determines the prediction. If 300 of 500 trees predict that a crime will occur at a location tomorrow, there is a high estimated likelihood (60 %) that the crime will occur at the location.
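
The voting logic can be demonstrated with scikit-learn's random forest implementation on synthetic data; note that scikit-learn's predict_proba averages per-tree class probabilities, which for fully grown trees behaves like the vote share described above.

```python
# Sketch of the majority-vote idea with a 500-tree random forest on
# synthetic location-day records (hypothetical predictors).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n = 3000
X = rng.normal(size=(n, 5))                 # hypothetical predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1).astype(int)

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
# Approximate share of trees "voting" that a crime occurs here.
vote_share = forest.predict_proba(X[:1])[0, 1]
print(f"{vote_share:.0%} of trees predict a crime at this location")
```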

The random forest algorithm appears to offer a number of advantages over other methods. First, data do not need to be preprocessed or normalized for use. For example, outliers (defined as extreme deviations in the data) do not need to be expunged or normalized. Second, if there are many input variables, specific selection of those variables does not need to take place because the random forest model targets the most useful variables. Third, “overfitting” of data (when a statistical model describes random error or noise instead of the underlying relationship) is less of a problem because classifications are averaged across a large number of trees (Berk 2008). The random forest model is yet to be tested with police data, so it is not yet known whether and how it can be used for predictive policing. Berk (2011), however, demonstrates its use and value in forecasting parole or probation failures.

Applications

One of the first applications of predictive policing methods occurred in the Richmond (VA) Police Department (RPD) in 2003. With the assistance of vendors, RPD developed a system that combined predictive crime analysis, data mining, geographic information system (GIS) capabilities, and reporting for RPD officers. Data from the RPD records management system were integrated and analyzed using SPSS’s predictive analytics software.

Richmond analysts created a model that automatically improved itself and avoided the manual refreshing of variables. Time, day, holidays, weather, moon phases, city events, paydays, and crime records were among the data that were analyzed. RPD developed a custom solution using several different technologies. Information Builders’ WebFocus software included business intelligence capabilities that displayed criminal activity every 4 hours. Officers received the information at police stations and in squad cars, and real-time alerts were sent by email and text message. GIS mapping from ESRI and aerial photography from Pictometry provided maps and pictures of reported incident locations and the surrounding neighborhoods. RPD used SPSS’s Clementine and Predictive Enterprise Services products for data mining capabilities that examined how current crime reports related to data on past, present, and projected actions (Harris 2008).

Data mining tools and predictive analytics were the key elements in the Richmond effort. Using data mining techniques, which basically churned through data, analysts found “hidden patterns and relationships” (McCue and Parker 2003). According to McCue, hypotheses were created, models were developed, and “what if” questions were posed. The data mining tools allowed RPD to conduct a multitude of activities focusing on crime fighting and efficient use of manpower. The developers of these tools asserted that they added value to (1) deployment, (2) tactical crime analysis, (3) behavioral analysis of violent crime, and (4) officer safety.

In using this approach, the Richmond police deployed officers at different hours of the day, during different seasons, and for varying types of offenses. The tools surfaced unanticipated factors that the police would not ordinarily have found without data mining. While this approach initially showed promise in Richmond, a new police administration has opted to move away from it and return to more traditional strategies.

LAPD And Santa Cruz Applications

In 2011, the Los Angeles Police Department (LAPD) and the Santa Cruz Police Department (SCPD) made news headlines for their use of predictive analytics developed by mathematicians at UCLA and Santa Clara University. Using methods from seismology, Dr. George Mohler and Dr. Martin Short developed math algorithms that have the potential to predict crime across time and space (Mohler et al. 2011). The algorithms were initially tested using core data from the LAPD. The math model is based on “self-exciting point processes” and repeat victimization theory. That is, crime tends to display similar statistical patterns whereby an initial crime event is followed close in time and close in space by additional crimes. In the case of property crimes, offenders tend to return to the same, or a nearby location to commit more crime in a short period of time to replicate the successes of the prior offense (Townsley et al. 2003). In the case of many violent crimes, repeat offenses occur in short successions in the same places as part of cycles of reprisals and counter-reprisals.

The self-exciting point process model describes the rate at which crime occurs at a given spatial location (e.g., street address) and does so in terms of a background rate and a self-exciting component. The background rate is connected to general features of the local environment, which are thought to be constant relative to the time scales at which criminals make decisions. For example, houses targeted by burglars generally do not change fast enough in their characteristics to deter a burglar who has recently committed a crime there. By contrast, the self-exciting component takes into account the notion that recent crimes at a given location or neighboring locations tend to increase the rate at which crime occurs there. Burglars often learn of goods that they could steal during the process of breaking into and burglarizing a home. They act on this information by returning to victimize that home again a few days later.
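
In its general form, the conditional intensity is a background rate plus a sum of kernel contributions from past events. The sketch below uses an exponential time decay and a Gaussian spatial kernel chosen for illustration, with arbitrary parameter values; Mohler et al. (2011) estimate their kernels from data.

```python
# Sketch of the conditional intensity of a self-exciting point process:
#   lambda(x, t) = mu(x) + sum over past events i of g(x - x_i, t - t_i)
# Illustrative kernels and parameters; not the fitted LAPD model.
import numpy as np

def intensity(x, t, events, mu=0.1, theta=0.5, omega=0.8, sigma=0.05):
    """events: iterable of (x1, x2, t) past crimes; x: (x1, x2)."""
    lam = mu  # background rate tied to the local environment
    for ex, ey, et in events:
        dt = t - et
        if dt <= 0:
            continue
        d2 = (x[0] - ex) ** 2 + (x[1] - ey) ** 2
        # Each recent, nearby crime temporarily raises the rate.
        lam += (theta * omega * np.exp(-omega * dt)
                * np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2))
    return lam

past = [(0.50, 0.50, 9.0), (0.51, 0.50, 9.5)]
print(intensity((0.50, 0.50), t=10.0, events=past))  # elevated near recent hits
print(intensity((0.90, 0.10), t=10.0, events=past))  # near background elsewhere
```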

In Los Angeles and in Santa Cruz, the math algorithm was tested on burglaries and burglaries from motor vehicles using 3 years of data. The expectation is that there is a background component to burglary rates as well as a strong self-exciting component. Locations targeted for burglary from motor vehicles typically offer both a large number of targets (e.g., parking lots, high-density street parking) and high variance in target types (e.g., car types) and goods to steal (e.g., laptops, cell phones, GPS units). Offenders specializing in burglary from motor vehicles tend to return to the same or nearby locations to replicate previous successes, and they do so over short time intervals, consistent with a self-exciting process. The researchers quantified these background and self-exciting components for burglary from motor vehicles and developed forecasts based on these quantitative models. Forecasts take the form of probabilistic assessments of the likelihood that different locations will experience burglaries. The volume of burglary from motor vehicle crimes and the greater stationarity of the target base suggest that forecasts may be possible at the scale of days and perhaps even finer temporal scales.

The self-exciting point process is being tested in the field in five divisions in Los Angeles. The algorithm identifies 500 by 500 ft areas of probable criminal activity. Each week, 60 areas are designated as predictive policing areas. Using an experimental design, the police captain within each division directs officers to 30 randomly assigned areas to prevent the crimes from occurring and leaves 30 areas as control areas where routine patrol activities take place. Measures include the time spent in each predictive area by “extra patrol” and by regular patrol. Crimes are also measured to determine how many have been prevented. Preliminary findings show that nonviolent crime has dropped in the areas.
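
The random assignment step of this design is simple to express in code; the sketch below splits a week's 60 flagged areas into 30 treatment and 30 control areas (the area identifiers are placeholders).

```python
# Sketch of the randomized field design described above: each week,
# 60 flagged areas are split at random into treatment ("extra patrol")
# and control (routine patrol) groups.
import random

area_ids = list(range(60))          # this week's predicted areas
random.seed(42)
treatment = set(random.sample(area_ids, 30))
control = [a for a in area_ids if a not in treatment]
print("treatment areas:", sorted(treatment)[:5], "...")
print("control areas:  ", control[:5], "...")
```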

A number of other agencies are in the process of using predictive analytics to predict crime and locations. These include Cambridge (MA), Charlotte-Mecklenburg (NC), Charleston (SC), Chicago, Indio (CA), Lincoln (NE), Memphis, Miami-Dade, New York City, Redlands (CA), Salt Lake City, and Shreveport (LA).

Key Issues/Controversies

This section examines three major issues: questions about “Minority Report,” civil liberties, and intelligence gathering; importance of data; and untested predictive analytic methods.

“Minority Report,” Civil Liberties, And Intelligence Gathering

The Steven Spielberg production of “Minority Report,” starring Tom Cruise, was released in 2002 and tells the story of a futuristic “Department of Pre-Crime” that works out of the US Department of Justice. The film depicts the use of “pre-cogs” who predict crimes as they float in an indoor pool and send telepathic messages naming potential victims and suspects to Justice Department police. Using advanced technology and access to untold databases, the police investigate the crime before it happens. The police seemingly ignore the 4th Amendment: they enter houses without warrants, use biometrics to confirm identities, surveil and enter apartments using spider-like devices that “read” corneas for identification, and, when they apprehend a suspect before the crime takes place, arrest him for a “pre-crime.”

The actions in the film led many policymakers, researchers, and law enforcement to ask: “Is predictive policing similar to Minority Report? Are we headed in that direction?” The questions raise a number of issues related to civil liberties, the 4th Amendment’s right to be free from unreasonable searches and seizures, and a concern for the type of information that is or could be collected by law enforcement.

Police departments and executives are keenly aware of the negative implications raised by the film and its application to predictive policing. In Los Angeles, the chief responds publicly and privately that “you can’t break the law to enforce the law” (OurWeekly 2010) and that the police “practice Constitutional policing.” More importantly, the concern for civil liberties has led to discussions with the American Civil Liberties Union (ACLU), the development of policies and procedures, and acknowledgement of the need for training on all facets of predictive policing. As a result of these discussions, the focus of the LAPD effort is centered on preventing specific types of crime and reducing crime in specific locations rather than predicting offender behavior.

Data

As with all uses of analytics, data are among the most critical aspects of predictive policing. The validity and legitimacy of any predictive model depends on the quality and quantity of data available. In law enforcement, data are usually in abundance – calls-for-service, incident reports, arrest reports, field interviews, and other data are collected routinely by most large- and medium-size departments across the country. But these data vary by police agency in terms of consistency and uniformity, and while they are abundant, their quality is unknown. Data need to be cleaned and validated to ensure that the results from predictive analytics are accurate.
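
A basic cleaning and validation pass might look like the following pandas sketch; the column names and checks are hypothetical, and real agency pipelines are far more involved.

```python
# Illustrative data-validation pass for incident records (pandas;
# hypothetical column names and toy records).
import pandas as pd

incidents = pd.DataFrame({
    "incident_id": [1, 1, 2, 3],
    "lat": [34.05, 34.05, None, 91.0],     # None and 91.0 are bad values
    "lon": [-118.24, -118.24, -118.3, -118.2],
    "offense": ["burglary", "burglary", "BURGLARY ", "robbery"],
})

clean = (incidents
         .drop_duplicates("incident_id")               # remove duplicate reports
         .dropna(subset=["lat", "lon"])                # drop ungeocoded rows
         .query("-90 <= lat <= 90 and -180 <= lon <= 180")
         .assign(offense=lambda d: d.offense.str.strip().str.lower()))
print(clean)
```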

While the amount of data may be sufficient for use in a predictive model, the information is often stored on legacy systems that may not be compatible with systems running predictive analytics software. Converting data on these legacy systems to a useable format is a challenge.

Untested Methods

The statistical techniques used in predictive analytics are largely untested and have not been rigorously evaluated. Risk Terrain Modeling, ProMap, neural networks, feature space, statistical learning models, the self-exciting point process, and predictive analytics developed by private vendors (IBM SPSS, SAS, IBI, and others) are yet to be fully tested in the field by independent evaluators.

Conclusion

Though predictive policing remains an untested concept, many police departments have begun to predict crime using mapping, business intelligence, and statistical analytic methods. The reasons behind the use of predictive analytics are many, including the need to increase efficiency and maximize police resources. In cities like Los Angeles, where resources are limited because of the downturn in the economy, cost-effective information technology solutions are an essential management tool. Increasing the ability of police agencies to effectively analyze existing and new data may provide broader crime reduction opportunities. This in turn will lead to new deployment strategies, different methods for measuring prevention, and ultimately, safer communities.

Bibliography:

  1. Beck C, McCue C (2009) Predictive policing: what can we learn from Wal-Mart and Amazon about fighting crime in a recession? The Police Chief 76(11)
  2. Berk R (2008) Forecasting methods in crime and justice. Ann Rev Law Soc Sci 4:219–238
  3. Berk R (2009) Statistical learning from a regression perspective. Springer, New York
  4. Berk R (2011) Asymmetric loss functions for forecasting in criminal justice settings. J Quant Criminol 27:107–123
  5. Bowers K, Johnson S, Pease K (2004) Prospective hot spotting: the future of crime mapping? Brit J Criminol 44(5):641–658
  6. Bratton W, Morgan J, Malinowski S (2009) The need for innovation in policing today. Unpublished manuscript, Harvard Executive Sessions, October
  7. Brown DE, Kerchner SH (2000) Spatial-temporal point process models for criminal events. Paper presented at the National Institute of Justice Annual Crime Mapping Conference. Washington, DC. Included in Brown DE (2002) Final report: predictive models for law enforcement. U.S. Department of Justice, Washington, DC
  8. Bursik RJ, Grasmick HG (1993) Neighborhoods and crime: the dimensions of effective community control. Lexington Books, Lanham
  9. Caplan JM, Kennedy LW, Miller J (2011) Risk terrain modeling: brokering criminological theory and GIS methods for crime forecasting. Justice Quarterly 28 (2):360–381
  10. Cohen LE, Felson M (1979) Social change and crime rate trends: a routine activity approach. Am Sociol Rev 44(4):588–605
  11. Fox JS, Huddleston H, Gerber M, Brown DE (2012) Investigating a Bayesian hierarchical framework for feature-space modeling of criminal site-selection problems. Paper presented at the Midwest artificial intelligence and cognitive science conference 2012, Cincinnati, 21–22 Apr 2012. Retrieved 29 May 2012, from http://ceur-ws.org/Vol-841/submission_31.pdf
  12. Goldstein H (1990) Problem-oriented policing. McGraw Hill, New York
  13. Groff ER, La Vigne NG (2002) Forecasting the future of predictive crime mapping. Crime Prevention Studies 13:29–57
  14. Gorr W, Olligschlaeger A (2002) Crime hot spot forecasting: modeling and comparative evaluation. Final report. U.S. Department of Justice, National Institute of Justice, Washington, DC
  15. Harris C (2008) Richmond, Virginia, police department helps lower crime rates with crime prediction software. Retrieved 10 Dec 2009, from http://www.govtech.com/gt/print_article.php?id=575229
  16. Johnson SD, Bowers KJ, Birks DJ, Pease K (2009) Predictive mapping of crime by ProMap: accuracy, units of analysis, and the environmental backcloth. In: Weisburd D, Bernasco W, Bruinsma GJN (eds) Putting crime in its place. Springer, New York
  17. Lewis M (2004) Moneyball: the art of winning an unfair game. W.W. Norton, New York
  18. Liu H, Brown DE (2000) A new point process transition density model for space-time event prediction. In: Brown DE (ed) (2002) Final report: predictive models for law enforcement. U.S. Department of Justice, Washington, DC
  19. McCue C, Parker A (2003) Connecting the dots: data mining and predictive analytics in law enforcement and intelligence analysis. Police Chief 10(10): 115–124
  20. Mohler GO, Short MB, Brantingham PJ, Schoenberg FP, Tita GE (2011) Self-exciting point process modeling of crime. J Am Stat Assoc 106(493):100–108
  21. OurWeekly (2010). Retrieved 5 May 2012, from http://ourweekly.com/los-angeles/los-angeles-police-chief-charlie-beck
  22. Pearsall B (2009) Predictive policing: the future of law enforcement? NIJ Journal, Issue No. 266, p 16 (National Institute of Justice)
  23. Ratcliffe JH (2008) Intelligence-led policing. Willan, Cullompton
  24. Sampson RJ, Groves W (1989) Community structure and crime: testing social disorganization theory. Am J Sociol 94(4):774–802
  25. Sherman LW, Gartin P, Buerger M (1989) Hot spots of predatory crime. Criminology 27(1):27–56
  26. Swatt M (2003) Short-term forecasting of crime for small geographic areas. Unpublished PhD dissertation, University of Nebraska at Omaha, 216 pp
  27. Townsley M, Homel R, Chaseling J (2003) Infectious burglaries: a test of the near repeat hypothesis. Brit J Criminol 43:615–633
  28. Uchida CD (2009) Predictive policing in Los Angeles: planning and development. A white paper published by Justice & Security Strategies, Inc., December
  29. Weisburd D (2012) Bringing social context back into the equation. Criminol Public Policy 11(2):317
  30. Williams G (2011) Data mining with Rattle and R: the art of excavating data for knowledge discovery. Springer, New York
  31. Wilson R, Smith SC, Markovic JD, LeBeau JL (2009) Geospatial technical working group: meeting report on predictive policing. US Department of Justice, National Institute of Justice
