Identification Technologies in Policing and Proof Research Paper


As citizens we are increasingly observed, traced, and documented in our routine and some of our not-so-routine activities. While it is common for citizens and scholarly commentators to question, and frequently criticize, invasive uses of emerging technologies, it is less common to focus attention on actual technical capacities and the use of “traces” as intelligence and evidence in criminal justice practice. This omission is curious, because there are long-standing problems with identification and proof in legal settings. This research paper discusses the reliability, and therefore evidentiary (or probative) value, of techniques (and technologies) routinely used for surveillance and proof of criminal activities (including terror offenses). Recent work on surveillance and surveillance technologies has tended to focus on their potential to change or disrupt settlements (and values), particularly older ones, around privacy and visibility. This research paper endeavors to shift that focus by introducing some of the epistemic dimensions and institutional responses to surveillance technologies, and their role in identification, in order to understand their use as intelligence and evidence in criminal investigations, plea bargains, and criminal prosecutions. Criminological literatures, particularly surveillance theory, have much to offer discussion of forensic science and forensic medicine, but recent challenges to the value of many technologies and derivative interpretations suggest that threats posed by “Big Brother” – that is, the state’s ability to “watch” – might, in some regards, be more clumsy (and simultaneously complicated, attenuated, and remote) than is often assumed.

Thinking about the role and use of the artifacts (or products) of surveillance in investigative and prosecutorial contexts is revealing because it tends to expose, both directly and indirectly, the limits of the technologies as well as the limits of legal processes. Recent authoritative criticisms of many forensic sciences (e.g., Saks and Koehler 2005; National Research Council 2009) reinforce the importance of questioning claims made by police, forensic scientists, lawyers, and judges. They might also lead us to doubt the value of legal institutions as appropriate regulators (or “gatekeepers”) of surveillance technologies and derivative evidence.

Substantively, this research paper begins with a review of three technological assemblages ubiquitous in modern criminal investigations and prosecutions, namely, those associated with the analysis of latent fingerprints, DNA, and security and surveillance images. Reviews of these particular technologies and the transformation of traces into admissible evidence are rewarding because of their distinctive historical trajectories, divergent scientific foundations, and pervasive use across a range of intelligence, investigative, forensic, and administrative settings. Ubiquitous in forensics and surveillance literatures as well as the popular imagination, fingerprints, DNA, and images provide interesting insights and illuminating comparisons to aid our understanding of surveillance technologies and their evidentiary products.

Latent Fingerprints

The identification of individuals is, of course, a problem of long standing to which a variety of technological solutions have been proposed (e.g., Groebner 2007). In the nineteenth century, concerned with increasingly mobile populations, state governments began to develop bureaucratic systems of identification attentive to the storage, retrieval, and deployment of information. One of these new systems, fingerprint identification, which was developed nearly simultaneously in colonial India for the control of native populations and Argentina for the control of “criminal classes” immigrating from Europe, was perhaps the first identification technology to develop, inadvertently it seems, a forensic application. While fingerprint identification was based on impressions of the “friction ridge skin” that covers the fingertips, created deliberately using ink, it was noted that similar impressions are often deposited inadvertently when individuals touch objects with smooth surfaces. Several different observers proposed that such impressions might provide evidence that a particular individual had touched an object, evidence that might, in some cases, prove useful in the investigation and prosecution of crimes. In early cases, the clerks who maintained identification bureaus were called upon to undertake the analysis of such “latent” impressions.

By the 1920s, fingerprint identification had become the world’s dominant system of identification for institutions, like prisons and police departments, charged with keeping track of “criminal” populations. These institutions began assembling large databases of inked fingerprints to prevent apprehended individuals escaping from their criminal pasts by adopting alternative identities (i.e., aliases). Latent fingerprint identification was becoming increasingly common as well. Methods for “dusting” (and photographing) objects in order to visualize latent impressions were developed, and the search for latent prints became a regular feature of police investigations, especially for more serious offenses. However, to be useful in identifying the perpetrator of a crime, latent print identification was dependent either on the selection of a suspect by other investigative means or a painstaking and resource-intensive “cold search” of a local or regional database – often a manual filing system. Nevertheless, fingerprint identification developed into the most trusted and iconic means of both forensic and administrative identification.

In court, the interpretation of latent prints was quickly deemed admissible following a modest degree of resistance by defendants. Courts based admissibility on: (1) a general assumption among informed observers that the evidence must be reliable; (2) the observation, though not systematically recorded or tested, that experts’ conclusions regarding latent prints were rarely refuted post hoc by external, epistemologically superior, facts; (3) an intuitive sense of the very high complexity and variability (or “selectivity,” Champod et al. 2004) of friction ridge skin itself.

By the 1980s, automated fingerprint identification, which had been under development for decades, had become sufficiently mature to penetrate major law enforcement agencies. This development made database searches much less costly, greatly enhancing the utility of fingerprint databases and the potential value of recovered latent prints.

In the 1990s, influenced largely by the controversies over DNA typing (discussed below), many commentators began to question the legal assumption that the accuracy and discriminability of latent print association could be taken on trust. This rethinking, along with some high-profile errors, led to legal challenges to fingerprint evidence, especially in the United States (Cole 2001). For the most part, these challenges have not persuaded courts to substantially modify their practices, but they have generated renewed attention to fingerprint evidence, in the form of a number of high-profile official inquiries and reports (National Research Council 2009; Campbell 2011; Expert Working Group on Human Factors in Latent Print Analysis 2012), and arguably, they played a role in stimulating new efforts at basic research on the accuracy (Ulery et al. 2011; Tangen et al. 2011) and discriminability (e.g., Neumann et al. 2012) of latent print associations.

DNA Profiling

The development of a method for visualizing “hypervariable” regions of the genome by Alec Jeffreys in the mid-1980s led to the proposal that genetic variations might be harnessed to provide another means of individual identification. Unlike with fingerprinting, the forensic application was apparent – and, indeed, seemed the principal use of the new technique – from the outset. “DNA fingerprinting,” as it was briefly called, could glean identity from biological material, such as blood or semen. The method was applied to a serial rape-murder case in the vicinity of Jeffreys’s institution, the University of Leicester, and it quickly became apparent that there were many crime scenes where biological material was present but fingerprints were absent. “DNA profiling,” as it was soon renamed, spread quickly, and the technology developed rapidly, with a series of new techniques replacing Jeffreys’s original multi-locus probes. Of particular importance to forensics was the application of the polymerase chain reaction (PCR), developed by Kary Mullis. This allowed for the “amplification” of small amounts of genetic (“trace”) material into volumes sufficient for analysis. Though the initial PCR-based techniques lacked discrimination, short tandem repeat (STR) profiling combined the sensitivity of PCR with high discrimination. This technique remains the standard technology used today, although more powerful techniques are on the horizon.
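
To make the comparison step concrete, the sketch below is a minimal, hypothetical illustration (not any laboratory's actual software) of the basic logic of a single-source STR comparison: genotypes are compared locus by locus, and non-concordance at any locus typed in both profiles ordinarily excludes the person as a possible source. The locus names (CODIS-style markers) and allele values are invented for the example.

def concordant(crime_scene_profile, reference_profile):
    """Return True if the genotypes agree at every locus typed in both profiles."""
    for locus, crime_alleles in crime_scene_profile.items():
        ref_alleles = reference_profile.get(locus)
        if ref_alleles is None:
            continue  # locus not typed in the reference profile; ignore it
        if set(crime_alleles) != set(ref_alleles):
            return False  # non-concordance at a single locus ordinarily excludes
    return True

crime_scene = {"D3S1358": (15, 17), "vWA": (16, 16), "FGA": (21, 24)}
suspect = {"D3S1358": (15, 17), "vWA": (16, 16), "FGA": (21, 24)}
print(concordant(crime_scene, suspect))  # True: a "match" whose weight still needs a statistic

A reported “match” of this kind still needs to be accompanied by a statistic (see “The Expression Of Results”) before its weight can be conveyed.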

As with fingerprints, state law enforcement agencies began assembling databases of genetic profiles. These databases were initially limited to serious sex crimes. Subsequently, they have expanded rapidly such that their eventual replacement of fingerprint databases seems probable. DNA databases have proved more controversial than fingerprint databases, largely based on the (in principle, questionable, Cole 2007) argument that DNA includes more “intimate” information than fingerprints (Krimsky and Simoncelli 2011).

Legal systems have differed sharply in reacting to this challenge. In the United States and the United Kingdom, the state’s interest in public safety has generally trumped privacy concerns pertaining to genetic information. The European Court of Human Rights, however, recognized a fundamental privacy interest in both DNA and fingerprint information, and this right, it ruled, demanded the exercise of moderation and proportionality in the state’s retention of such information, a ruling which bound the United Kingdom as well (S. and Marper v. United Kingdom 2008). The European Court was particularly concerned by the potential practice of “familial searching,” in which near correspondence in database searches would be used to generate investigative leads concerning the blood relatives of those in the database. American courts, in contrast, have generally approved of this practice.
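
As a rough, hypothetical sketch of the logic underlying familial searching (a deliberate simplification: operational systems use kinship likelihood ratios built on formal relatedness models rather than raw counts), database profiles can be ranked by how many alleles they share with a crime-scene profile, on the reasoning that close relatives tend to share more alleles than unrelated individuals. All profiles, locus names, and donor labels below are invented.

def shared_allele_count(profile_a, profile_b):
    """Count alleles shared, locus by locus, between two STR profiles."""
    total = 0
    for locus, alleles_a in profile_a.items():
        remaining = list(profile_b.get(locus, ()))
        for allele in alleles_a:
            if allele in remaining:
                remaining.remove(allele)  # count each shared allele once
                total += 1
    return total

def rank_candidates(crime_scene, database):
    """Order database profiles by decreasing allele sharing with the crime-scene profile."""
    return sorted(database.items(),
                  key=lambda entry: shared_allele_count(crime_scene, entry[1]),
                  reverse=True)

crime_scene = {"D3S1358": (15, 17), "vWA": (14, 16)}
database = {
    "donor_A": {"D3S1358": (15, 16), "vWA": (14, 18)},  # shares 2 alleles
    "donor_B": {"D3S1358": (12, 13), "vWA": (11, 18)},  # shares 0 alleles
}
print(rank_candidates(crime_scene, database)[0][0])  # donor_A ranks first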

In court, DNA evidence was challenged more forcefully than fingerprinting ever was, and this resulted in some adverse decisions. It is generally agreed that this small number of challenges and adverse rulings (e.g., People v. Castro 1989) stimulated improvement in the science. After a period of public controversy – often characterized as the “DNA wars” – the legal admissibility of routine DNA evidence became non-contentious. Issues remain, however, for cases involving complex mixtures of DNA from several individuals, more exotic techniques like mitochondrial and low copy number DNA profiling, and contamination and biased interpretation (Aronson 2007; Lynch et al. 2008; Kaye 2010).

Image Interpretation (E.G., Of CCTV Recordings And Photographs)

Images have been used in and around legal proceedings and public inquiries (e.g., the Zapruder film at the Warren Commission) for more than a century – initially in the form of photographs, film, and video, and more recently in the guise of animation and “virtual” displays (Feigenson and Spiesel 2009). Photographic images, incorporated into many official documents (e.g., drivers’ licenses, passports, and criminal records), provide important means of identification in administrative, commercial, and social settings. Accepting that images may have considerable potential as sources of information, here it is our intention to focus on their use in legal proceedings for purposes of identification.

In recent decades, especially with the advent of CCTV systems in public spaces, police, investigators, and prosecutors – and to a more limited extent criminal defendants – have begun to use images to piece together criminal activities and to identify the perpetrators of such acts (e.g., Goold 2004). In conjunction with the rapid expansion of security cameras in private spaces (e.g., houses, banks, bars, casinos, convenience stores) and/or quasi-public spaces (e.g., shopping malls, airports, and around ATMs), these systems have produced a very large number of images indexable to criminal activities – ranging from shoplifting to assault, armed robbery, and murder. The availability of images that, unlike DNA and fingerprint evidence, are susceptible to immediate interpretation by investigators (and others) has begun to change the way that criminal acts are investigated and presented to various publics whether as evidence in court or via various media in the public domain. By way of example, intelligence operatives, investigators, and general duty police officers – with limited, if any, specialist training – are often obliged to watch and interpret videos (and images) in order to advance investigations or assemble intelligence. Similarly, newspapers and television are used to enlist an expanded audience that might be capable of identifying individuals or providing investigative leads.

Ordinarily, images related to criminal offenses are admissible and the tribunal of fact is free to interpret the content, which may involve attempting to track events and/or identify those involved. The low quality of many crime-related images has meant that investigators and prosecutors often call upon others to provide supplementary interpretations and guidance. Courts, particularly in common law jurisdictions, have responded unevenly to the admission and use of opinion evidence to assist with identification. Approaches range from jurisdictions such as England – where police (including investigating officers) and a range of individuals, recognized as “face and body mapping” experts (with a background or training in intelligence, medical art, photography, information technology, anatomy or physical anthropology) or gait analysts (usually podiatrists), are permitted to express opinions about the content of images and positively identify persons of interest; to Australia – where police officers are generally prohibited from expressing opinions about identity, but “facial mappers” are allowed to describe similarities between a person of interest and the accused; to Canada – where police, prison and parole officers, possessing some familiarity with the accused, can positively identify them in incriminating images; to the United States – where investigators tend to rely upon quantitative (i.e., photogrammetric) approaches purporting to capture features such as the height or shoe size of offenders (Edmond et al. 2013).

The admission of images and incriminating interpretations of images appears to be an institutional attempt to accommodate the exponential proliferation of potentially probative evidence emerging in recent decades, in conjunction with a naive confidence in the abilities of lay people – whether lawyers, judges, or jurors – to manage image interpretation, and particularly identification, within the confines of the adversarial trial. Opinions about images are routinely admitted. Witnesses are sometimes limited to describing similarities rather than expressing positive opinions about identity. The value of such opinions is ordinarily left for the tribunal of fact – as a matter of weight.

This ready admission and reliance upon images in criminal investigations and prosecutions is revealing, particularly when contrasted to DNA profiles and latent fingerprints. For, there are no standardized or empirically validated approaches to the capture and storage, let alone interpretation, of images. Excepting some photogrammetric methods – with their origins outside of the forensic sciences – we have no indication of the validity or reliability of the variety of techniques and derivative opinions allowed in courts. To put this another way: we do not know how accurate the various “expert” witnesses are, or if they can consistently do what they claim.

Not insignificantly, experimental studies indicate that humans are error prone when asked to compare people in photographs or compare photographs and persons (i.e., typical courtroom scenarios) even when relying upon contemporaneous, high-quality, full-frontal images (Davis and Valentine 2008). Conspicuous exceptions are those with considerable familiarity (e.g., family members and close friends), especially where exposure covers significant periods of time and a variety of circumstances. On average, when familiars are shown images and/or video and asked to express opinions about identity they tend to be reasonably accurate – even where the quality of images is low and the exposure short (Jenkins et al. 2011). Predictably, familiars are often reluctant to testify and incriminate. Such reticence helps to explain reliance on investigators, the legal recognition (or construction) of “expertise,” and ready admission of opinions.

One factor, applicable to all identification technologies, though perhaps clearest in the forensic use of images, is the beguiling complexity of interpretation. The pervasiveness of photographs, films, and videos, along with the frequency of our exposure, encourages courts to admit images, frequently characterizing them as objective evidence of what transpired. Witnesses and juries are routinely asked or expected to approach images as mechanical reproductions of reality; their interpretation is implicitly straightforward – a relatively mundane or even intuitive activity. There are, however, serious complexities with image interpretation and with the kinds of intertextuality – e.g., the narration of images by prosecutors or expert witnesses during criminal proceedings – that may subtly (and sometimes unconsciously) shape or cue perception and interpretations. In contrast to electropherograms and latent fingerprints, the tribunal of fact is routinely encouraged to undertake its own interpretation of the primary data (i.e., the images), often in conjunction with “expert” opinion and the other evidence. Additional incriminating evidence may implicate the accused and be conveyed to those endeavoring to interpret images – whether analysts or juries – in ways that skew perception and frustrate attempts to convey the difficulties of image interpretation. The interpretation of unfamiliar kinds of images, such as x-rays, fMRI scans, and aerial photographs, together with the numerous controversies in the long history of photography and film (e.g., Fenton’s cannon balls and the beating of Rodney King), reinforces just how complex and controversial images and their interpretations can be (e.g., Morris 2011).

Key Issues

At this juncture, we turn to consider our three forensic technologies via a series of analytical frames and themes drawn from surveillance and other criminological and legal literatures.

Accommodation: Legal Responses Are Driven By Technological Advances And The Availability Of Incriminating “Evidence”

Investigators, prosecutors, and judges tend to be rapid and uncritical adopters of identification technologies, typically embracing their outputs and opinions about their outputs as probative, and implicitly reliable, evidence. Admissibility standards and contests over the probative value of incriminating expert opinions have been ineffective – and at best inconsistent – at excluding unreliable opinions or exposing weaknesses in the interpretation and expression of opinions about fingerprints, DNA profiles, and images.

Courts in most jurisdictions – including those with reliability-based admissibility standards (e.g., in the United States and Canada following Daubert v. Merrell Dow Pharmaceuticals, Inc. 1993) – have been surprisingly uncritical and remarkably accommodating in their responses to opinion evidence derived from new technologies and older technologies subjected to new criticisms. Rather than require empirical support for interpretive techniques and expressions, courts express confidence in the efficacy of traditional legal processes and trial safeguards (see “The Limits of Legal Proceedings”). They have preferred to “grandfather” long-standing techniques (e.g., latent fingerprint comparison), accept witnesses with investigative experience (see “Reliability Discourses” and “Experience and the Reification (or Legal Construction) of Expertise”) or broadly relevant qualifications – along with their self-serving claims – and they have relied primarily on the deconstructive abilities of poorly resourced defense lawyers to expose and convey limitations to lay jurors (and judges), rather than impose responsibility upon the state to demonstrate reliability (i.e., validity and reliability) through reference to empirical studies.

Moreover, once a technology or derivative interpretive technique is admitted in one jurisdiction, admission tends to follow in others. Conditions imposed on the initial admission are frequently elided in subsequent decisions, thereby facilitating technological creep (Risinger 2000).

Weapons Of The Not So Weak: Resistance Is Not Always Useless

Following Scott (1985), a common theme in contemporary surveillance studies concerns the ability of various individuals and publics to resist or subvert surveillance, oversight, and identification. Prompted by concerns with visibility, traceability, and often privacy, a range of responses has emerged that, to varying degrees, enables individuals and groups to resist scrutiny or turn methods of surveillance and accountability back on to those who normally “watch” (e.g., sousveillance). The technologies we have selected are illuminating because they are more or less intrusive depending upon how information is captured and used, as well as their actual capabilities – both current and anticipated.

It is argued, for example, that DNA samples and profiles may be used to do more than determine a non-match or a match and a probability (or likelihood) of the accused being the source. DNA samples, which are often retained by government agencies, can theoretically be analyzed in ways that may facilitate discrimination – both positive and negative. Analysis might be used to identify a risk of a particular illness, such as susceptibility to breast cancer, and this might be used to prevent premature death or to discriminate through exclusion from insurance coverage. Many (negative) discriminatory uses are proscribed by constitutional rights or legislative enactments – always susceptible to the possibility that they might be reviewed, circumvented, or interpreted in ways that accommodate new technological capabilities, whether real or imagined.

Conversely, the current limits of interpretation mean that it is far from obvious that the availability of images will enable investigators, and even attentive publics and familiars, to definitively determine what took place and who was involved (e.g., the Vancouver and Tottenham riots). Depending on the cameras, systems, and sophistication of resistance, those capturing images, for whatever reason, may or may not be able to use the images as probative evidence. Techno-legal assemblages (see “Back to the Future: No Escape from Interpretation in Complex Techno-legal Assemblages”) have both technical and socio-legal constraints and limitations.

Anyone remotely conversant with criminal justice knows that some technologies can be resisted (or subverted) through relatively simple techniques such as wearing gloves (against fingerprint collection) or condoms (against DNA typing), or wearing bulky clothing, a hat or balaclava (for cameras) (e.g., Marx 2003). Such strategies of resistance, used by criminals and other citizens to disrupt comparison and identification, may or may not be effective. For, it is increasingly common for a variety of technologies (or artifacts) to be used collectively to assist with identification and proof. Where a DNA match is obtained, or similarity is observed between an image from a robbery and a suspect, an ATM transaction or mobile phone call may be used to place a person of interest in the vicinity of the offense. Moreover, certain forms of resistance (such as a particular disguise) might lead to propensity interpretations as criminal “signatures.” Of course, where suspects offer a potentially innocent explanation or defense – such as consent in a sexual assault complaint – the value of identification technologies may rapidly depreciate along with the need for resistance or critique. Similarly, claims about what happened “off camera” might be useful to raise doubts around the cause of injury or culpability for serious injury resulting from a partially filmed (and therefore fragmentary) assault in a bar or street. This often shifts the hermeneutic exercise from the non-contentious issue of identity to questions about what actually transpired.

While there are means of resisting forensic and surveillance technologies, it is difficult to resist multiple identification technologies in court, especially when they are combined into a coherent narrative. Where a variety of different technological artifacts are aligned, especially if they are presented as independent and corroborative, it has become increasingly difficult to resist the cumulative results of skillfully integrated allegations.

Interestingly, not every criminal act is attended by resistance to the identification of its perpetrator. Many criminal acts are performed directly, some even deliberately, in front of cameras (and/or witnesses) or in ways that are seemingly indifferent to surveillance technologies and investigative possibilities.

Back To The Future: No Escape From Interpretation In Complex Techno-Legal Assemblages

Drawing conspicuously on science and technology studies (STS and SSK) literatures, notably the work of Latour (1987), surveillance studies have embraced the idea of surveillance assemblages (Haggerty and Ericson 2000). That is, machines/equipment, techniques, procedures (all potentially actants), individuals, institutions, training, traditions, and even cultures are understood as combining into an assembly of constituent factors that structures the way in which interpretive activities are performed and understood. Although many of these features and dimensions tend to be omitted, or elided, in formal accounts and legal proceedings, their reintroduction provides a useful way of approaching the construction of surveillance and evidentiary artifacts and the potential for them to be opened up and pulled apart (or deconstructed, after Jasanoff 1995) in legal proceedings. The idea of the assemblage, combining human users, technologies, and traditions, reinforces the inescapability of human participation and the need for interpretation across the forensic sciences. Interpretation and risks of error occur across the spectrum of activities, from the recognition and collection of traces, through the analysis of results, to attempts to ascribe significance in the context of a case or administrative process. Human interpretation is perhaps most conspicuous on the margins: with low copy number DNA analysis, badly degraded DNA samples or results obtained at instrumental thresholds, partial or smudged latent fingerprints, and in responses to badly distorted or very old images.

Even sophisticated automated and semi-automated systems, such as DNA profiling machines or the automated fingerprint databases, require a human interface to interpret results as well as assess their potential significance to an investigation or prosecution. They also depend, as Lynch et al. (2008) explain, on mundane practices – chains of custody, collection and storage practices, training of personnel, accreditation of laboratories, the honesty of investigators and technicians as well as assumptions about the way traces may have been deposited. Rather than simply overcoming known problems, new and more powerful technologies (e.g., low copy number DNA profiling) may actually compound difficulties as smaller and smaller samples can be analyzed in ways that generate more mixed samples and introduce stochastic effects that require new forms of interpretation – and make the need for bias-reducing procedures, such as sequential unmasking, even more urgent.

Notably, fingerprint identification, as traditionally conceptualized, has lacked the “interpretative step,” in which the analyst, having found correspondences between prints, assesses the significance of this finding by estimating the rarity of the corresponding features in some relevant population. Instead, assumptions about fingerprints lead analysts to equate a “match” with the positive identification of a particular individual to the exclusion of all others (i.e., individualization).

The interpretation of images represents an obvious example of the interpretative predicament. Whereas closed biometric systems relying upon images and/or other data such as fingerprints may perform reasonably well – especially on relatively small data sets, where the reference data and application are obtained in controlled conditions – when it comes to determining the identities of unknown individuals appearing in security and surveillance images, things are decidedly more complicated and not necessarily susceptible to algorithmic analysis – at least not without very significant levels of error (Introna and Wood 2004). In most investigations involving images, there are no relevant databases (e.g., R v. Atkins 2009). Rather, those recognized by courts as experts – because they are believed to be able to assist the tribunal of fact – are allowed to express their opinions about similarity and/or identity in the absence of any kind of systematic information about: the frequency of facial (or bodily) features; the independence of different facial features (e.g., lips and nose); and usually in the absence of methods capable of explaining how the analyst overcomes the distortion created by cheap cameras, poor positioning, bad lighting, image perspective, and the generally low levels of data retention.
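
The trade-off behind these “very significant levels of error” can be illustrated with a small, entirely hypothetical sketch: a face-matching algorithm assigns similarity scores to image pairs, and the false non-match rate (missed identifications) and false match rate (false hits) both depend on where the decision threshold is set, so lowering one error rate raises the other. The scores below are invented for the example.

# Invented similarity scores a face-matching algorithm might assign to image pairs.
mated_scores = [0.62, 0.71, 0.55, 0.80, 0.48, 0.90, 0.66]     # pairs showing the same person
nonmated_scores = [0.30, 0.44, 0.58, 0.25, 0.61, 0.37, 0.52]  # pairs showing different people

def error_rates(threshold):
    """False non-match rate (missed identifications) and false match rate (false hits)."""
    fnmr = sum(s < threshold for s in mated_scores) / len(mated_scores)
    fmr = sum(s >= threshold for s in nonmated_scores) / len(nonmated_scores)
    return fnmr, fmr

for t in (0.4, 0.5, 0.6):
    fnmr, fmr = error_rates(t)
    print(f"threshold={t:.1f}  FNMR={fnmr:.2f}  FMR={fmr:.2f}")

With low-quality, uncontrolled surveillance images, the two score distributions overlap heavily, so no threshold yields acceptably low error of both kinds.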

Reliability Discourses

The epistemic value of many of the forensic sciences, particularly comparison sciences used for purposes of identification, was explicitly questioned after an inquiry by an eminent multidisciplinary committee (of the National Research Council, 2009) established by the National Academy of Sciences (USA). The committee’s conclusions are revealing. Its report is not only a remarkably critical response to many comparison/identification techniques in routine use, but it also captures something of the light that scientifically predicated DNA techniques have cast on the other comparison “sciences.”

Without wanting to suggest that DNA processes are infallible or even a “gold standard” to be faithfully emulated, it is useful to contrast the validation studies, sophisticated statistical work on population models and probability/likelihood ratio frameworks that have accompanied the largely extralegal negotiations surrounding the refinement of DNA evidence, with the credulous and accommodating reception of many other forensic science techniques – extending beyond latent fingerprint and image comparison to include, among others, ballistics, tool marks, handwriting, dental, hair, footprint, shoeprint, ear print, and tire mark comparison. This accommodating posture predates the introduction of DNA profiling and stubbornly survives its refinement in the 1990s. To varying degrees, non-DNA techniques have yet to produce, and courts yet to stipulate, the need for independent evidence of validation and reliability.

As things stand, and somewhat tautologically, investigators and experts tend to use prior admission, guilty pleas, and convictions as a basis for claims about the reliability of their techniques, interpretations, and evidence. In the absence of demonstrated validity and reliability studies (or independent evidence of proficiency), admission and convictions (however obtained) do not necessarily provide epistemic support for technologies and techniques. Moreover, the complex assemblages associated with “identification” are often obscured or “black boxed” and left to impecunious criminal defendants and their state-funded defense attorneys to consider challenging (or deconstructing) in the risky realm of the (adversarial) criminal trial.

Reviews of wrongful convictions suggest that techniques used to establish identity may sometimes fail to correct misleading investigative information and faulty assumptions, such as mistaken eyewitness accounts or confessions procured under duress (Garrett 2011). Where analysts are exposed to prejudicial information that is not required for their analysis (e.g., the suspect’s prior criminal conduct and/or investigators’ beliefs about culpability), these dangers are likely to be accentuated (Dror et al. 2006).

The Expression Of Results

Considerable effort has been expended on the interpretation of DNA matches and the way results are expressed in reports and in courts. The results of DNA profiling are now routinely expressed in probabilistic terms based on widely accepted population models and statistics. Prominent, scientifically based, refinements to the reporting of DNA matches have created problems for other comparison sciences, including the interpretation of latent fingerprints and images.
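
As a simplified illustration of the form such probabilistic statements take (assuming a single-source profile, Hardy-Weinberg equilibrium, independence across loci, and ignoring subpopulation corrections), the random match probability (RMP) is the product of the genotype frequencies at each of the L typed loci, and the corresponding likelihood ratio (LR) compares the hypothesis that the accused is the source with the hypothesis that an unrelated person is:

\[
\mathrm{RMP} \;=\; \prod_{i=1}^{L} P_i,
\qquad
P_i \;=\;
\begin{cases}
2\,p_i q_i & \text{(heterozygous locus, allele frequencies } p_i, q_i\text{)}\\
p_i^{2} & \text{(homozygous locus)}
\end{cases}
\qquad
\mathrm{LR} \;=\; \frac{1}{\mathrm{RMP}}.
\]

On these assumptions, if each of, say, thirteen loci contributed a genotype frequency of roughly 0.1, the RMP would be on the order of one in ten trillion; it is figures of this kind, and the population-genetic work behind them, that most other comparison techniques have lacked.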

Recently, in response to admissibility challenges to opinions based on latent fingerprints and images, several courts have required witnesses to qualify (i.e., read down) their results, sometimes restricting them to descriptions of apparent similarities (so-called splitting). This intervention is revealing. It reinforces the absence of validation and reliability studies able to support the historical form of expression (e.g., individualization and no errors for latent fingerprints) and exposes the willingness of these witnesses, legally recognized as experts, to express strong forms of incriminating opinion in their absence. It also evidences commitment to the admission of opinion evidence and naive confidence in the ability of the trial to identify and convey limitations.

Curiously, it is highly unusual for forensic scientists to provide an indication of the error rate with the expression of results from most of these techno-legal assemblages. Rather, it seems to be the responsibility of those challenging “identification” evidence to retrospectively identify errors in the collection, storage, analysis, interpretation, and the expression of results.
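
By way of a hedged sketch of what routinely reporting an error rate could look like: given counts from a proficiency or “black box” style study (the figures below are invented, not drawn from any actual study), a false positive rate and a simple confidence interval around it can be computed and quoted alongside the analyst's conclusion.

import math

# Invented counts for illustration only (not real data): comparisons of non-mated
# (different-source) pairs, and how many were erroneously reported as identifications.
nonmated_comparisons = 4000
false_positives = 6

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    centre = (p_hat + z**2 / (2 * trials)) / denom
    half_width = (z * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))) / denom
    return centre - half_width, centre + half_width

rate = false_positives / nonmated_comparisons
low, high = wilson_interval(false_positives, nonmated_comparisons)
print(f"estimated false positive rate: {rate:.4%} (95% CI {low:.4%} to {high:.4%})")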

Experience And The Reification (Or Legal Construction) Of Expertise

Courts have been responsible, to a considerable degree, for recognizing and indeed reifying some forms of experience and/as expertise, through the legitimacy conferred by admission and through the reliance placed on incriminating expert opinion evidence in supporting convictions (and upholding them on appeal). Instead of requiring evidence of validity and reliability, courts have tended to accept weaker surrogates such as formal qualifications and experience. Many courts, including those in jurisdictions with formal reliability standards purportedly governing the admission of expert opinion evidence, have tended to place great store on formal qualifications and, particularly, experience, along with earlier accommodating admissibility jurisprudence, when approaching the admissibility of forensic science and medicine evidence (Edmond et al. 2013).

Rather than require evidence of ability and accuracy, in the form of published experimental studies, judges have preferred to look to practical experience and institutional affiliations (e.g., with the FBI, the FSS, and RCMP). While we accept that experience is an important dimension of expertise, we would contend that in most cases, and certainly with the kinds of technologies discussed in this research paper, experience alone (or even in conjunction with past “successes” and supplementary incriminating evidence – a strong case) cannot replace the need for demonstrable evidence of reliability. Experience is useful once the value of a technique has been demonstrated empirically.

In consequence, legal institutions have played an important if under-appreciated role in the construction and social legitimation (or “certification”) of forensic science and medical evidence as “knowledge,” through admission and by allowing reliance to be placed upon techniques and opinions of unknown evidentiary value. Through their failure to impose credible restrictions on admission or to adequately explore and convey limitations, courts – and lawyers, in particular – have facilitated admission and encouraged reliance upon techniques that were (or are) neither adequately understood nor demonstrably reliable. Somewhat iatrogenically, this accommodating posture may have actually inhibited study and testing. Indifference to reliability has undoubtedly contaminated intelligence and investigative practice, actively contributed to wrongful convictions, and allowed serious offenders to remain undetected, free to re-offend (Garrett 2011).

The Limits Of Legal Proceedings

In criminal justice systems, trial courts are conventionally characterized as the appropriate forum to contest and assess expert opinion evidence. They do not generally perform well in these capacities (Edmond and Roach 2011). Indeed, one of the reasons forensic scientists (and practitioners of forensic medicine and dentistry) have been insufficiently attentive to the reliability of identification technologies and their various uses is that those managing the admission and presentation of evidence have been overconfident and accommodating because of their faith in the ability of legal processes to identify and convey limitations to lay tribunals of fact and courts of appeal. Such accommodating, and empirically indifferent, responses are difficult to reconcile with the expressed goal of “doing justice in the pursuit of truth” (Ho 2008).

Empirical evidence suggests that courts and jurisdictions vary dramatically in their ability to respond to forensic science and medicine evidence, and that individually and cumulatively trial safeguards and protections – such as prosecutorial restraint, (reliability-based) admissibility standards for expert opinion evidence, cross-examination, rebuttal experts, restrictions on the expression of results, directions and warnings to juries, and scope for appeals – are not particularly effective means of addressing problems with expert opinion (Edmond and San Roque 2012). Their limited value is compounded by under-resourced defense lawyers, the state’s (near) monopoly of many types of “expert” evidence and experience (e.g., latent fingerprint analysts), excessive trust in police and investigative agencies, the complexity of many forms of statistical and probabilistic reasoning, and the requirement that lay persons assess expert opinion evidence in the context of a criminal trial. There are relatively few well-resourced and technically sophisticated challenges to forensic science techniques and derivative opinions, and correspondingly few incentives for police and forensic scientists to study or improve their techniques.

Intelligence, Investigations, Prosecutions, Evidence, And Proof

In this research paper, we contend that many of the technologies used routinely for surveillance, in investigations and as evidence in criminal proceedings, are not as unproblematic as routinely represented by investigative agencies, prosecutors, judges, and the media (e.g., CSI). Among commentators, considerable attention has been directed to the front stage: to privacy concerns and to the ways surveillance technologies are adopted, used, and understood by institutions, operators, investigators, and various publics. Typically, less attention is dedicated to the backstage (or inside the “black box”): the value of the technologies and their evidentiary products, the capabilities of legal institutions and personnel, or the models of science, technology, and expertise embodied in the ways they are represented.

Following from the discussion of the epistemic value, particularly the uncertain validity and reliability of many “identification” techniques, we have to wonder about the use of terms such as “intelligence” and “evidence.” Such terms are question begging: requiring further discussion about whether claims can be sustained and the kinds of models of science, technology, and expertise that might be used to support and understand their use in particular contexts and for specific uses. Without wanting to dismiss some kind of epistemic continuum, in practice, it will often be difficult to know precisely where techniques of interpretation might sit. In theory, there seems to be a spectrum from uncertainty (or ignorance) to certainty, with “intelligence,” “evidence,” and “proof,” each respectively situated further along the spectrum toward “certainty.” For the criminal justice system, there are risks, and a corresponding need for caution, if the probative value of the products of surveillance technologies is low or unknown – located toward the “uncertainty” end of the spectrum. This may require a protective attitude, formal protections, and possibly exclusion (or nonuse), particularly if technologies and derivative opinions are not demonstrably reliable.

Regardless of the nomenclature, there may be differences in the suitability of particular techniques, of varying degrees of reliability, to investigations as opposed to trials and criminal proof. Because of a range of explicit commitments and obligations – embodied in rules and procedures as well as constitutional and human rights protections – courts would seem to be constrained in the types of techniques and interpretations they might admit toward proof in criminal proceedings. In principle, criminal proceedings ought to be organized in a manner that embodies the desire for factual accuracy and fairness as well as an express commitment to avoid convicting the innocent.

Intelligence gathering and criminal investigations are not, for good reasons, constrained in the same way. Problems may emerge, however, when a range of speculative and error-prone technologies are available and relied upon uncritically by investigators and/or used as probative evidence in criminal proceedings.

Future Directions

This research paper suggests that the value of surveillance technologies used routinely to observe, monitor, and interpret behaviors, and identify various publics and individuals, may be quite varied, necessarily moderating their suitability for investigations and criminal proof. There appears to be tremendous variation in what those using comparison techniques, whether based on DNA, latent fingerprints, or images (and extending to voice recordings, hair, handwriting, ballistics, tool marks, and so on), can credibly claim. Against expectations, we cannot assume that surveillance technologies, including those in routine use by investigative agencies and courts, are particularly effective. Interestingly, pervasive use and popular impressions reveal little about, and do not correlate with, evidentiary value.

This research paper reveals complexity, and unevenness, in the ways institutions adopt, use, depend upon, and rationalize recourse to a range of identification technologies. It also suggests that even assemblages (including scientifically based technologies such as DNA profiling) unavoidably depend upon humans for the collection and interpretation of results and the attribution of meaning in particular investigative and prosecutorial settings. It seems likely that even with substantial advances in technological capabilities, whether through increasing sensitivity or whole genome sequencing, higher resolution cameras and greater data storage, or more accurate algorithms to enable automated comparisons, we will not eliminate – and probably not dramatically reduce – errors or the role of analysts, interpreters, and others. There may be technological advances, but there will be relatively fewer technological fixes. This seems to mean that the ability of the state (as well as the many corporate and private observers) to monitor, let alone effectively trace and identify citizens in a range of social, commercial, and criminal contexts, may be more constrained than is routinely suggested or believed. Scholars such as Lyon (2008) suggest that we need to develop more balanced responses to surveillance practices and capabilities – recognizing both the threats they may pose to traditions, privacy, and freedom as well as their ability to improve our lives and enhance safety and security. DNA profiling – with its diagnostic, discriminatory, and therapeutic possibilities, its ability to assist with investigations and identify criminals, as well as its exposure of wrongful convictions and the frailty of criminal justice processes – represents an example of the potential value of emerging empirically predicated technologies and their ability to produce valuable, if sometimes unexpected, insights.

This research paper has adopted a somewhat unreflexive approach to technologies – particularly discussion of the validity and reliability of evidentiary products. We accept that the meaning of reliability is a social accomplishment, unavoidably extending beyond experts and domains of expertise to incorporate social, institutional, ideological, and practical dimensions (Bijker et al. 1987). Rather than adopting an essentialist or technocratic approach to the meaning of reliability or the effectiveness of a technology, we encourage analysts to shift focus to the particular use, the setting(s), institutional values, and traditions and even the way negotiations around the meaning and use of technologies ought to be conducted – in particular settings. It is our contention that in criminal justice settings, the state should be able to satisfy an onerous standard – guaranteeing basic trustworthiness – before interpretations derived from surveillance assemblages are admitted as incriminating opinions to assist with proof of identity and guilt. This is a response to the premium placed upon accuracy and fairness, but is also indexed to the frailty of many surveillance assemblages along with emerging evidence about the weakness of criminal trial safeguards. Legal systems have performed poorly in regulating and assessing identification technologies (NRC 2009).

Significantly, we would not necessarily suggest that the standards required of the state in relation to incriminating expert opinion evidence in criminal prosecutions should apply in other settings, such as civil justice, intellectual property, regulatory contexts, or even criminal investigations. Rather, models of science, technology, and expertise should be developed with sensitivity to goals, institutional values, and the risks associated with specific domains.

Bibliography:

  1. Aronson JD (2007) Genetic witness: science, law, and controversy in the making of DNA profiling. Rutgers University Press, New Brunswick
  2. Bijker W, Hughes T, Pinch T (eds) (1987) The social construction of technological systems: new directions in the sociology and history of technology. MIT Press, Cambridge, MA
  3. Campbell A (2011) The fingerprint inquiry report. APS Group Scotland, Edinburgh
  4. Champod C, Lennard C, Margot P, Stoilovic M (2004) Fingerprints and other ridge skin impressions. CRC Press, Boca Raton
  5. Cole SA (2001) Suspect identities: a history of fingerprinting and criminal identification. Harvard University Press, Cambridge, MA
  6. Cole SA (2007) How much justice can technology afford? The impact of DNA technology on equal criminal justice. Sci Publ Pol 34(2):95–107
  7. Cole S (2011) Splitting hairs? Evaluating ‘split testimony’ as an approach to the problem of forensic expert evidence. Syd Law Rev 33:459–485
  8. Davis J, Valentine T (2008) CCTV on trial: matching video images with the defendant in the dock. Appl Cognitive Psych 23:482–505
  9. Dror IE, Charlton D, Péron AE (2006) Contextual information renders experts vulnerable to making erroneous identifications. Forensic Sci Int 156:74–78
  10. Edmond G, Roach K (2011) A contextual approach to the admissibility of the state’s forensic science and medical evidence. U Toronto Law J 61: 343–409
  11. Edmond G, San Roque M (2012) The cool crucible: forensic science evidence and the frailty of the criminal trial. Curr Issues Crim Justice 24:51–68
  12. Edmond G, Cole SA, Cunliffe E, Roberts A (2013) Admissibility compared. University of Denver Criminal Law Review (in press)
  13. Expert Working Group on Human Factors in Latent Print Analysis (2012) Latent print examination and human factors: improving the practice through a systems approach. U.S. Department of Commerce, National Institute of Standards and Technology
  14. Feigenson N, Spiesel C (2009) Law on display: the digital transformation of legal persuasion and judgment. New York University Press, New York
  15. Garrett B (2011) Convicting the innocent: where criminal prosecutions go wrong. Harvard University Press, Cambridge, MA
  16. Goold B (2004) CCTV and policing. Oxford University Press, Oxford
  17. Groebner V (2007) Who are you? Identification, deception, and surveillance in early modern Europe (trans: Kyburz M, Peck J). Zone, New York
  18. Haggerty KD, Ericson RV (2000) The surveillant assemblage. Br J Sociol 51(4):605–622
  19. Ho HL (2008) A philosophy of evidence law: justice in the search for truth. Oxford University Press, Oxford
  20. Introna LD, Wood D (2004) Picturing algorithmic surveillance: the politics of facial recognition systems. Surveill Soc 2:177–198
  21. Jasanoff S (1995) Science at the bar. Harvard University Press, Cambridge, MA
  22. Jenkins R, White D, Van Montfort X, Burton AM (2011) Variability in photos of the same face. Cognition 121(3):313–323
  23. Kaye DH (2010) The double helix and the law of evidence. Harvard University Press, Cambridge
  24. Krimsky S, Simoncelli T (2011) Genetic justice: DNA data banks, criminal investigations, and civil liberties. Columbia University Press, New York
  25. Latour B (1987) Science in action. Harvard University Press, Cambridge, MA
  26. Lynch M, Cole SA, McNally R, Jordan K (2008) Truth machine: the contentious history of DNA fingerprinting. University of Chicago Press, Chicago
  27. Lyon D (2008) Surveillance studies: an overview. Polity, Cambridge
  28. Marx G (2003) A tack in the shoe: neutralizing and resisting the new surveillance. J Soc Issues 59(2):369–390
  29. Morris E (2011) Believing is seeing: observations on the mysteries of photography. Penguin, New York
  30. National Research Council, Committee on Identifying the Needs of the Forensic Science Community (2009) Strengthening forensic science in the United States: a path forward. The National Academies Press, Washington, DC
  31. Neumann C, Evett IW, Skerrett J (2012) Quantifying the weight of evidence from a forensic fingerprint comparison: a new paradigm. J Roy Stat Soc A 175(2):1–26
  32. Risinger M (2000) Navigating expert reliability: are criminal standards of certainty being left on the dock? Albany Law Rev 64:99–152
  33. Saks M, Koehler J (2005) The coming paradigm shift in forensic identification science. Science 309:892–895
  34. Scott JC (1985) Weapons of the weak: everyday forms of peasant resistance. Yale University Press, New Haven
  35. Tangen J, Thompson M, McCarthy D (2011) Identifying fingerprint expertise. Psychol Sci 22:995–997
  36. Torpey J, Caplan J (eds) (2001) Documenting individual identity: the development of state practices since the French revolution. Princeton University Press, Princeton
  37. Ulery B, Hicklin RA, Buscaglia J, Roberts MA (2011) Accuracy and reliability of forensic latent fingerprint decisions. Proc Natl Acad Sci 108(19):7733–7738
