Detecting Deception With fMRI Research Paper

The application of contemporary neuroimaging technologies to the detection of deception has garnered popular attention in recent years. Members of the scientific community have proposed that functional magnetic resonance imaging (fMRI) can be employed as a signal detector to predict behavior and cognitive states. This endeavor is often discussed with a tone of hopeful optimism, but it must be considered with adequate scientific rigor, a proper understanding of the limitations of the tool being used, and a sense of social responsibility. In the formative years of the field of criminology in the late nineteenth century, attempts to unify imaging techniques and physiological data for the purpose of human classification yielded questionable results and undesirable social influences. It is necessary not to repeat past mistakes in the excitement over a novel technique.

Imaging, Data, And Criminal Classification

The intersection of imaging, behavioral science, classification, and criminology has origins that date back to the late nineteenth century, when Sir Francis Galton described methods of composite photographic portraiture, a technical innovation at the forefront of imaging technology in its time (Galton 1879). Galton hoped to apply this tool to “elicit the principal criminal types by methods of optical superimposition of [...] portraits,” in other words, to identify through composite imaging the physical appearance of a typical criminal. These experiments, though embraced with optimism by the scientific community, proved to be a failure. Galton ultimately observed that “...the features of the composites are much better looking than those of the components. The special villainous irregularities in the latter have disappeared, and the common humanity that underlies them has prevailed.” While Galton’s technical developments are impressive in their detail and rigor, his broader perspective on the applications of his techniques was flawed. Nearly a century later, scientist and historian Stephen Jay Gould would appropriately label Galton an “apostle of quantification,” suggesting that his tireless measurements of human physical features in the attempt to achieve classification were simply too narrow in scope to serve as reliable models for the natural world (Gould 1981). Companions to Galton’s experiments with composite portraiture were the notions of physiognomy and eugenics – ideas that suggested that human characteristics were biologically determined and socially controllable. These hypotheses not only failed to achieve any scientific credence in the twentieth century but also, through their later misappropriation, demonstrated the socially dangerous political malleability of unsubstantiated scientific ideas.

The well-known nineteenth-century criminologist Cesare Lombroso was also no stranger to physiognomy and eugenics (Lombroso 1876–1897). A proponent of biological determinism and physiological classification, Lombroso built his early scientific hypotheses on phrenology, a practice (now widely discredited) that attempted to predict and classify human characteristics through measurements of the shape of the skull (Gray 2004). This putative science is a distinct example of a sound fundamental theory (an existing relationship between the brain, personality, emotion, and behavior) coupled with a thoroughly unsound notion of how it could be measured and applied. Instead of being treated as a scientifically falsifiable hypothesis, phrenology was practiced under the assumption of its validity. Lombroso’s foundational text also addressed numerous other external features that could be observed, quantified, and described in relation to the criminal type, such as tattoo markings on the body, linguistic and emotional behaviors, and nuances of handwriting. It is no surprise, given his interest in the indexical tabulation of observable features, that Lombroso dedicated a chapter of the 1884 edition of his text to “Photographs of Born Criminals,” proposing that images could be employed to profile a criminal type.

Late nineteenth-century science was no doubt catalyzed by rapidly evolving complementary technological innovations, such as advances in photographic techniques and printing, that had by that point undergone several decades of development and refinement. The period was characterized by great eclecticism and imagination, though these positivist pursuits were not always tempered by healthy social skepticism; Galton and Lombroso were relatively unconfined to particular specializations, and their research practices branched into some peculiar explorations. Indeed, Lombroso’s final project (published posthumously) was a positivist inquiry into hypnotism and spiritual phenomena, complete with photographic “evidence” that the camera could visualize ghosts, phantasms, apparitions, and ectoplasms (Violi 2004). Lombroso had been wary of such investigations earlier in his career, changing his position in the early 1890s after taking to attending séances. His reputation as a scientist ultimately overcame any skepticism about his association with paranormal investigation, his authority lending credence to the field rather than being debased by it. Though not completely estranged from what was scientifically acceptable at the time, paranormal studies, like physiognomy and eugenics, have remained scientifically unsubstantiated over a century hence (Porter 2003).

Galton and Lombroso’s particular attempts to unify physiognomy, photography, statistics, and criminology, though ambitious, were built on unsound assumptions and demonstrably misguided, regardless of any lasting contributions the two men made in other fields. Galton, in particular, was a brilliant technical innovator, but he failed to recognize that a useful tool is not tantamount to a useful model of human nature. Both eugenics and physiognomy, despite general enthusiasm about their potential among the scientific communities of their times, have been demonstrated to be at best untenable and at worst socially dangerous, as evidenced by their appropriation by the pseudosciences of the Nazi party as propaganda to justify monstrous ideologies and acts of mass murder (Gray 2004). Such unsettling outgrowths of the ideals of scientific positivism underscore the crucial importance of responsible social vision, good sense, and integrity in any applied science, whatever promise it may appear to offer for the immediate future.

As easy as it may seem to dismiss bad ideas from century-old science as antiquated and irrelevant in the present time, it is important not to fall into similarly narrow interpretations clouded by enthusiasm about current technological prospects. Functional magnetic resonance imaging (fMRI), the most contemporary imaging technology employed at the intersection of physics, physiology, psychology, and neuroscience, offers great promise for future developments in scientific understanding. However, improper application and misunderstanding of this tool undermine its potential; its technical complexity can grant it a false credibility that risks being passed off to the public under the authoritative moniker of science.

Clarifying A Complex Tool

Commercial firms offering for-profit fMRI lie detection services are optimistic about its promise and technical merits. fMRI has been described in promotional material as a “direct measure of truth verification,” an “unbiased method for the detection of deception and other information stored in the brain,” a means to investigate “the science behind the truth” and “provide independent, scientific validation that someone is telling the truth.” While the marketing decision to shift language from “lie detection” to “the science behind the truth” may be a clever one, it distracts from the important fact that a technology or procedure, no matter how sophisticated, cannot justify a model of nature that is fundamentally inaccurate. MRI is, simply, a tool, like a camera or a microscope, which can offer insights into the workings of the mind and brain when properly applied and reasonably interpreted in conjunction with other forms of inquiry. It is important to distinguish the use of fMRI to detect a signal, as in the case of deception detection, from the practice of scientific research, which attempts to test and refine abstract models of nature by repeatedly testing falsifiable hypotheses; using fMRI for signal detection is, rather, a kind of engineering, an attempt to develop a means to perform a desired function. An elephant in the room in fMRI diagnostics is a simple question: Is the putative signal, in fact, what it is assumed to be? Another elephant, perhaps the next room over, might follow with: Even if so, and even if it can be detected, is the proposed application an appropriate use of this resource? In any case, a clearer understanding of fMRI technology is in order if we are to consider its potential, its limitations, and how it ought best (or ought not) be used.

In the interest of demystifying a complex and sophisticated technology, it can be considered in more familiar terms. An MRI brain image is similar in ways to a commonplace photograph. Indeed, some MRI technicians, when communicating with lay participants, refer to MRI image acquisition with the familiar language of “taking pictures.” This metaphor is apt; MRI is, much like a camera, a means to record an index of a space within a given field of view (this technical term, field of view, is used in both photography and MRI imaging). In the case of photography, the field of view is determined by optics and perspective; an image of a three-dimensional object is projected onto a two-dimensional plane and recorded through some technical means. The photographic image is an index of a visual space, a record of the phenomenon of light, though an incomplete or slightly distorted version of it due to its optical projection. A two-dimensional photograph can never, despite any level of technical or optical sophistication, fully represent the three-dimensional space from which it originated (Arnheim 1954/1974). Furthermore, as simple as it may seem, it is important to distinguish the constitution of the image from its content: the image is often described as “the reality,” but it is, in fact, an image and an index, not the reality itself. This consideration is important in relation to the claim that an imaging technology self-validates by bringing the viewer “closer to the source.” The translation of this problem to fMRI is elementary, but easy to overlook. Technology may afford us the ability to represent (“visualize”) what was not possible in the past, but it is important not to confuse a model with what it represents. Scientific models, and even scientific images, will always contain some level of abstraction, and in MRI this is absolutely the case. Images analyzed in fMRI studies are parts of a model: a complex, multilayered index of brain activity, and throughout the analysis process, this index becomes ever more abstracted. This abstraction is extremely useful for testing and interpreting results and formulating theories according to the most reasonable interpretations, but this image evidence, by the very structure of the scientific method, is neither simple, hard, nor immutable. It is, at best, a means by which to formulate theories of what is likely to be happening in a system that is not directly observable.

Some basic misconceptions that surface in the popular discussion of fMRI can be done away with through a better basic understanding of the technology. For one, it does not index “blood flow,” as many descriptions claim (including those published in certain commercial fMRI detection promotional material). Functional MRI signals are indices of shifts in local blood oxygenation levels, which in turn are interpreted as indices of neural activity due to the energy (via oxygen) that such activity consumes. The particular relationship between the immediate shift in local blood oxygenation levels and neural activity is not very well understood, but at the very least, the measure should be identified the same way it is by those who study it: the BOLD (blood oxygen level dependent) signal. A more specific model for the brain activity presumed to underlie the BOLD signal is at the cellular level, where neurons (brain cells) consisting of an axon tail extending from the cell body send electrical potentials (propagating voltages) along the axonal membrane. At the terminus of the axon, the signals influence neurotransmitter releases across intercellular space to receptors on the dendrites of other cells, leading to the buildup and propagation of subsequent electrical pulses by the recipient cells. This activity, compounded on the order of millions in cellular groups (ganglia), requires energy for fundamental physiological processes: some that are typical of cellular operation – such as metabolism – as well as some functions specialized to the neuron, such as the operation of “pumps” that move positive and negative charges across the cell membranes to propagate electrical signals.

The fundamental physics of MRI involve the scanner’s sensitivity to the differential alignment of atomic nuclei, which are manipulated by pulsing radiofrequency signals through an object in an extremely strong magnetic field. Extending the photographic analogy: a photograph is a record of light (photo: light; graph: drawing), while an MRI image is a kind of “atomograph,” a means of “drawing” an image of the nuclei of atoms in the magnetic field. Indices of blood oxygenation levels collected in fMRI are indices of shifts in the shape of hemoglobin, a macromolecule in the blood that carries oxygen to supply cells with energy. Hemoglobin takes different forms depending on whether or not it is carrying oxygen (it is either “oxygenated” or “deoxygenated”), and the MRI scanner, with its atomic recording properties, can be tuned to detect the differences between the molecule in these respective states. In order to create three-dimensional images, the scanner collects a series of two-dimensional slice images that are subsequently stacked into a three-dimensional volume, wherein the two-dimensional pixels constituting each slice become three-dimensional voxels according to the spacing parameters of the slices. The “functional” aspect (the “f” in fMRI) involves the incorporation of a time factor – the scanner collects a sequence of brain volume images, which are later reconstructed as a time series of three-dimensional volumes, just as a film is simply a sequence of still photographs. Functional MRI studies typically use two different types of image acquisition. A high-resolution anatomical image is acquired, which is of relatively precise detail, with voxels of around 1 mm per side, but requires several minutes to acquire, like a long-exposure photograph. The functional images are acquired rapidly, with an entire brain volume (usually 30–40 slices) collected in 2–3 s.
Because of the need for rapid acquisition, the images are much coarser, with the brain volume segmented into cubes around 3–4 mm per side. This trade-off of image quality for time is analogous to that in photography and motion pictures; anyone who has operated digital still and video cameras has probably noticed the difference in image quality between the two formats, with reductions necessary in order to stream video through a limited bandwidth.
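The resolution trade-off described above can be made concrete with a rough calculation. The figures below are illustrative assumptions drawn loosely from the text (a cubic field of view of 180 mm per axis, 1 mm anatomical voxels, 3 mm functional voxels), not scanner specifications:

```python
# Rough comparison of anatomical vs. functional MRI sampling.
# All dimensions here are illustrative assumptions, not vendor specs.

FOV_MM = 180  # hypothetical cubic field of view, mm per axis

def voxel_count(fov_mm: int, voxel_mm: int) -> int:
    """Number of voxels in a cubic volume at a given isotropic resolution."""
    per_axis = fov_mm // voxel_mm
    return per_axis ** 3

anatomical = voxel_count(FOV_MM, 1)  # fine detail, minutes to acquire
functional = voxel_count(FOV_MM, 3)  # coarse, a whole volume every ~2 s

print(f"anatomical voxels: {anatomical:,}")   # 5,832,000
print(f"functional voxels: {functional:,}")   # 216,000
print(f"detail ratio: {anatomical // functional}x")  # 27x
```

The 27-fold difference in spatial detail is the price paid for sampling the whole brain every few seconds, just as video frames sacrifice resolution relative to still photographs.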

Though the BOLD signal is the starting point in fMRI analysis, these signals undergo a complex series of analytical processes and, ultimately, are translated into three-dimensional statistical maps distributed through a model of the brain. Data processing is typically described in two general stages: preprocessing and analysis. Preprocessing is a set of computations performed on the data time series to correct for temporal and spatial errors that might have occurred during data collection, to improve the signal-to-noise ratio of the BOLD signal, and to spatially normalize (“warp”) the data to a physiological template from which specific neuroanatomical regions can be estimated. Such steps may include (but are not limited to) slice-time correction to account for offsets in the serial acquisition of the slices comprising each three-dimensional brain image in each temporal sample (think of it as a “frame” in a three-dimensional “brain movie”); co-registration of each three-dimensional map in the time series (an alignment of each “frame” of the “brain movie”); temporal filtering or linear de-trending of the time series (removal of “drift” artifacts that are generated by the scanner and are not a product of any physiological activity); application of Gaussian spatial smoothing to increase the signal-to-noise ratio in regions of the brain presumed to be active at a scale larger than a single voxel; spatial normalization (a computational warping) to a documented anatomical coordinate system such as the Talairach-Tournoux or Montreal Neurological Institute template, resampling each voxel’s native dimension to an isometric space of voxels of equal dimension; and finally any normalization of the signal to a standard measurement scale such as percent signal change.
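One of the preprocessing steps listed above, linear de-trending, can be sketched in a few lines. This is a toy illustration with made-up numbers; production pipelines implement de-trending alongside the other corrections, typically in packages such as SPM, FSL, or AFNI:

```python
# Toy linear de-trending of a single voxel's time series:
# fit a least-squares line to the series and subtract it, removing
# slow scanner "drift" while leaving faster fluctuations in place.

def detrend(series):
    """Subtract the least-squares linear trend from a time series."""
    n = len(series)
    t = list(range(n))
    mt = sum(t) / n
    ms = sum(series) / n
    var_t = sum((ti - mt) ** 2 for ti in t)
    slope = sum((ti - mt) * (si - ms) for ti, si in zip(t, series)) / var_t
    return [si - (ms + slope * (ti - mt)) for ti, si in zip(t, series)]

drifting = [100.0, 101.5, 101.0, 102.5, 102.0, 103.5]  # upward drift
flat = detrend(drifting)  # residuals now sum to ~0; drift removed
print([round(v, 2) for v in flat])
```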
After preprocessing is completed, a statistical map of the response to experimental conditions is obtained, typically by applying the general linear model (GLM) to estimate parameters given the particular design model of the stimulus timing presentation. The GLM applies a parametric model that accounts for the different factors in the study design by modeling a hypothesized response (referred to as the hemodynamic response or impulse response function); the fit between the empirical data and this model yields a set of statistical values. This series of fit values (beta values) is then mapped into the voxel space of the brain image, providing a statistical parametric map of the activity in each region. These beta values are then entered into subsequent calculations in order to test various contrasts between conditions. Subsequent statistical tests are carried out by averaging data over regions spatially defined according to a priori criteria, or tests are carried out independently within each voxel in the cortex, in which case the results must be statistically controlled for the tens of thousands of repeated measures to obtain a corrected probability of error in the result (Nichols 2012).
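The fitting step just described can be sketched at its simplest: a per-voxel least-squares fit of the measured time series against a single task regressor. The numbers are toy data, and the design omits hemodynamic convolution, nuisance regressors, and multiple conditions, all of which real GLM analyses include:

```python
# Minimal single-voxel GLM sketch (toy data): fit a BOLD time series
# against one hypothesized task regressor. The slope (beta) indicates
# how strongly the voxel's signal tracks the modeled response.

def fit_beta(x, y):
    """Ordinary least-squares slope of y on x, with an intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Boxcar on/off task regressor and a noisy response (both invented).
design = [0, 0, 1, 1, 0, 0, 1, 1]
bold   = [10.0, 10.2, 11.1, 11.0, 10.1, 9.9, 11.2, 10.9]

beta = fit_beta(design, bold)  # positive beta: activity tracks the task
print(round(beta, 3))  # prints 1.0
```

Repeating such a fit in every voxel, with a full design matrix, yields the statistical parametric map described above.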

As is the case throughout contemporary science, fMRI research involves the compounding of layers upon layers of theoretical models, and it can be misleading to characterize any of this as a direct measure of reality. Indeed, in science we can hope to identify what is most likely while expecting that we might always be wrong. A margin for error is always part of any quantitative model. It is not uncommon for a thorough research group to require several months to design and administer an fMRI study, refine the analyses, and develop a reasonable interpretation of the results prior to submitting them to a peer review panel for publication, after which, of course, they would be open to discussion in the broader scientific community.

The administration of a complete fMRI study is by no means as simple as administering a traditional polygraph test, and even the fundamental task of data collection is fraught with caveats. Slight head movements can cause degradation of the signal or compromise the integrity of the spatial mapping of the BOLD signal. It is not unusual for respondents with head motion greater than 8 mm (approximately one-third of an inch) in any direction to be discarded from a study due to loss of data integrity, despite the considerable cost to the researcher, a challenge heightened by the confined space of the magnet bore in which respondents must remain motionless during scanning. Thus, it is important to maximize the relative comfort of respondents in an otherwise uncomfortable circumstance; claustrophobia is a disqualifying factor in the selection of participants. Hearing protection should be worn, as acoustic noise can reach 120 dB(A), a sound pressure level adequate to cause hearing damage. Nevertheless, despite the discomfort of complete immobility in an enclosed space and extreme levels of auditory noise, participants can fall asleep during scanning, yielding useless (but expensive) data. The magnet, typically 3 Tesla in strength, emits a powerful magnetic field. As such, any ferromagnetic material in proximity to the magnet is extremely dangerous, both to the individual being scanned and to the administrators of the scan. At risk of serious injury and/or death, it is unsafe to scan any individual with metal implants, microscopic ferromagnetic remnants that might be present in the skin or mucous membranes (a concern for anyone who has spent appreciable time in metalworking or industrial settings), or even large tattoos, which can contain trace amounts of iron oxides that can cause serious dermal burns (followers of Lombroso who would wish to use fMRI technology for criminal classification could only hope his suggested relationship between body markings and criminal behavior is specious, as his tattooed criminals would not qualify for interrogation). Other risks are the generation of body heat caused by electromagnetic radiofrequency excitation and the toxic cryogenic cooling materials that could be released in the event of a technical malfunction of the magnet. Typically, fMRI magnets are maintained by institutions (such as hospitals) that keep a support staff of physicists, radiologists, and skilled technicians, supporting costs by renting scanning time to researchers and physicians with a considerable maintenance budget. A dedicated in-house fMRI lie detector would be wasteful and impractical, as the best model for such a resource-heavy device is shared use by a broad set of researchers and physicians with adequate budgets to support its administration costs.

Signal And Specificity

The complexity of collecting and analyzing functional MRI data raises a question: Why would such a technology be suited for the detection of deception? While brain imaging may appear to be a direct measure of cognitive activity, it would be better considered as an index of an extremely complex neurophysiological system from which cognition originates. Despite the expense and complexity of the technology and its power for making inferences about physiological activity from studies of groups of individuals, it has yet to be demonstrated as a reliable signal detector at the level of individuals, and its use in individual diagnosis still faces numerous challenges.

Any signal detection model has four possible outcomes: a Hit, in which the signal is present (the respondent is lying) and is properly identified (the detector registers a lie); a Miss, in which the signal (lie) is present but is not identified (the detector registers “not-lie”); a False Alarm, in which the signal is not present (the respondent is telling the truth) but a lie is improperly registered; and a Correct Rejection, in which the signal is absent and the detector properly registers no lie. In the extrapolation of such a model to the interrogation of suspects, the imperative is clear: such a signal detector would have to be perfect to avoid wrongful convictions (False Alarms).
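This outcome taxonomy can be tallied directly. The trial data below are hypothetical; the Correct Rejection outcome (a truthful answer correctly registered as truthful) is included for completeness:

```python
# Tally signal detection outcomes from hypothetical (truth, detected)
# pairs: `truth` is whether the respondent actually lied, `detected`
# is whether the detector registered a lie.

def classify(truth: bool, detected: bool) -> str:
    if truth and detected:
        return "hit"
    if truth and not detected:
        return "miss"
    if not truth and detected:
        return "false alarm"
    return "correct rejection"

# Hypothetical session: three lies and three truthful answers.
trials = [(True, True), (True, False), (True, True),
          (False, False), (False, True), (False, False)]

counts = {}
for truth, detected in trials:
    label = classify(truth, detected)
    counts[label] = counts.get(label, 0) + 1

print(counts)  # one miss and one false alarm in this invented sample
```

A detector intended for interrogation would need the false-alarm count driven to zero, which is the perfection requirement noted above.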

Furthermore, a reliable signal detector needs to satisfy four criteria. First, it must be sensitive to the signal that it attempts to detect. Second, it must be specific enough to detect the signal that it purports to detect and should not detect a more abstract corollary of that signal. Third, it must be generalizable across individuals and contexts – it must not be a function of a given circumstance or sample of the population but must work in a variety of scenarios. Fourth, it must be robust enough to resist attempts to intervene on the signal – in the case of lie detection, this could be noise introduced into the signal or attempts by the participant to “deceive” the signal by enacting countermeasures, be they cognitive or physiological.

It is worthwhile to consider some examples from the extant research literature on functional imaging and classification. Numerous studies have endeavored to identify the network of neural activity involved in truth telling and lying, and there is converging evidence that such a network can be identified as the likely set of neural components active during the cognitive state of lying. Research suggests that a network involving the ventrolateral prefrontal cortex, dorsolateral prefrontal cortex, dorsomedial prefrontal cortex, and anterior cingulate cortex is active during intentional deception (Phan et al. 2005). These regions, or a subset of them, have been discussed as possible target regions from which individual lies can be identified (Abe et al. 2007; Mohamed et al. 2006). At this point, it is necessary to distinguish group-level classification from classification of the individual. Of the voluminous body of functional MRI research, the vast majority of studies are group-level studies that identify brain regions active in a given task. The aforementioned results provide evidence that these regions, identified at the group level, may be active during lying, but they do not attempt classification at the individual level. Abe et al. (2007) did report lie classification at a 92 % success rate, but in fact used the traditional physiological polygraph to obtain this result (with a small sample of only 6 subjects), and simply reported aggregate results of the accompanying fMRI study without classification, underscoring the challenge fMRI faces to improve on such results and satisfy its promise of meeting the necessary standards of reliability, generalizability, and robustness.

Individual-level classification has been carried out by several researchers using sophisticated within-subjects paradigms that creatively extend the traditional group-level analysis methods employed in most functional MRI research. Kozel and colleagues reported an individual classification success rate of 90–93 %, and rates between 71 % and 85 % have been reported in other research (Langleben et al. 2005; Monteleone et al. 2009). These results suggest that fMRI is indeed well above chance but remains imperfect. Furthermore, only with the utmost rigor can the network of regions employed in classification be distinguished from its role in other processes; the set of regions used to identify lies is just as likely active in a variety of other social and cognitive processes.

At the individual level, even if fMRI signal detectors are far from perfect, they offer the promise of satisfying the criterion of sensitivity. This, however, could also be said of the traditional polygraph operated by an expert, as classification rates above 90 % have also been reported using that method (Abe et al. 2007). The inability of the traditional polygraph to meet the requirements of generalizability and robustness is clear (Cacioppo 2003), and fMRI methods, despite their sophistication, still face these same challenges. According to the best of our limited understanding of the nature of cortical networks and function, fMRI could just as well prove a failure in terms of specificity. While the aggregate analysis of fMRI data is scientifically sound as a way to consider evidence that could converge on a better understanding of neural function, through repeated studies and aggregated results from other levels of modeling neural function (such as cellular-level recordings), the appropriation of task-related aggregate brain images for individual classification risks being no more justified than Galton’s attempts to use aggregate photography to classify criminals. Over a century after Galton’s failed imaging classification experiments, a surprisingly similar problem appears: the classifier must consider an aggregate image and make a reasonable conclusion about what it is and what it is not. One must consider that individual variability may not be accurately predictable in every case from an aggregate.

The difficulty in reliably identifying individuals from a putative network suggests that the network is not, in fact, specific to delivering false information, and alternative networks may be active during the presumed lie state. This is sensible, as the construct of “lying” is itself a nonspecific term. Though adequate to describe a particular behavior, the colloquial idea of a lie must be reconsidered according to cognitive neuroscience models of neural and mental function. For example, lying could be considered a complex cognitive state composed of processes including (but not limited to) language, memory, and higher-order reasoning. Furthermore, the network of regions described by researchers as the “lie network” is by no means specific; these regions have been demonstrated to be active in a variety of cognitive tasks. For example, any enthusiasm over the specificity of the prefrontal cortex loses momentum when we consider the complex range of functions it plays in neural and mental life, from well-documented cases of its role in emotional processing (Damasio 1994) to its activity even in the “blank-slate” default network that is engaged when no cognitive task is at hand (Raichle and Snyder 2007).

The specificity problem is perhaps the most often overlooked amidst the enthusiasm about diagnostic fMRI. Given the complexity of brain activity, the limited understanding of it, and the index-of-an-index structure of the fMRI model, it would be foolhardy to presume that any signal collected in the brain is, a priori, specific to a particular behavior taking place when the signal was recorded. The theory that cortical function is not fixed, but instead is dynamic, modular, and adaptable, is not a new one. Over a half-century ago, neurologist Karl Lashley dedicated decades of research to the search for “the engram,” a cortical trace of memory that could be defined in rodents. After years of carefully controlled experiments failed to detect a locus for a particular memory, even through the methodical excision of every part of the rodent cortex, he concluded that “there is no demonstrable localization of a memory trace” (Lashley 1950). Since Lashley’s work, cognitive neuroscientists have embraced distributed processing models of brain activity, which propose that regional activations may best be understood as indices of subcomponents of cognitive processes, rather than as representations of specific behaviors, thoughts, or cognitive states (Rumelhart et al. 1986). While regions participating in a network may be more reliably specific to certain components, it could be reductive, limiting, or simply inaccurate to claim that a given region is tied to a particular module of a cognitive process.

Indeed, cognitive neuroscientists face the daunting challenge of synthesizing models of neural function from a variety of levels of inquiry – molecular, unicellular, multicellular, physiological, electrical, and computational – in the attempt to reconcile some sense of understanding of how a mass of trillions of electrically charged biological units gives rise to the phenomena of cognition, emotion, and behavior. While estimates of the number of cells in the human brain are on the order of a hundred billion, the number of interconnections transmitting signals at the subcellular level is much larger, on the order of hundreds of trillions. Given that these connections at the subcellular level are hypothesized to code neural information, the spatial precision of the fMRI signal, on the order of the size of a pea, is considerably coarse in comparison to the microscopic scale of individual neural connections. In light of this, functional MRI is but one imperfect tool in an extremely complex puzzle, one of several converging lines of evidence that together best suggest the relationship between the physiology of the brain and the abstract functions of the mind. To reduce this endeavor to the simple detection of deception would be not only reductive but would also propagate an oversimplification counterproductive to scientific understanding.

Broader Cognitive And Social Caveats

Given the questionable scientific substance of lie detection, a reliable lie detector remains an idea of fiction. Even presuming that such a detector could be devised, that possibility says little about whether it would be prudent to implement it. Lombroso’s ideas of biological determinism are reflected in other outmoded early twentieth-century scientific theories, such as behaviorism, which still hold influence over some models of personality and social behavior. There is a risk of overestimating the “nature” side of a dichotomous nature/nurture model of human behavior. It may be tempting to compartmentalize physiology and the brain simply as phenomena of nature, but the brain is not a static entity. As a functional organ, a system of complex, time-dependent network signals, the brain appears to be anything but a fixed, deterministic system. Though brain structures may be essentially fixed, that does not mean brain function cannot be molded and modulated through experience and social influence. Furthermore, even consciousness and memory themselves are unreliable, as brilliantly demonstrated in the work of Elizabeth Loftus (Loftus 2003). Even if a sophisticated signal detector were to exist to trace the neural register of conscious memory, what a subject deems to be “the truth” cannot always be trusted.

Returning to the subject of fiction, literature has long warned of the dangerous malleability of “the truth” in a technocracy. This idea is brilliantly described in the dystopian vision of George Orwell’s classic, Nineteen Eighty-Four. The Orwellian “Ministry of Truth” is, of course, an institution for the ironic promulgation of blatant falsehoods. Orwell, who was born at the end of Galton and Lombroso’s generation and wrote his most visionary work during the turbulence of World War II (a decade replete with Nazi pseudoscience), was prescient in his consideration of the misappropriation of science and its potential for social injustice. But few authors could be considered more apropos to the particular discussion of technological classification than Philip K. Dick. Dick imagines the horrific social potential of misappropriated technological classification in several works: downloaded brain data are used to subject individuals to life-or-death trials (Dick 1966); electrophysiological prints are employed as classifiers in an oppressive eugenic society where lack of identity results in a lifetime of forced labor (Dick 1974); individuals are incarcerated for “precrimes” predicted by mysterious employees of a deterministic police state (Dick 1956); and a portable machine containing a battery of physiological measures is employed to test for humanity – failure results in classification as a rogue android, resulting in termination (Dick 1968). In any of these imaginary dystopias, an fMRI lie detector would seem, disturbingly, to be a perfect fit.

In his 1953 novella, The Variable Man, Dick presents an astute and more optimistic synecdoche of applied science in society: set in a deterministic technocracy where wars are waged based on the quantitative predictions of a computer algorithm, the story describes a scenario eerily similar to the institutional application of a signal detection classification system. The titular “variable man,” a tinker from the early twentieth century who has been transported to the future through an unexpected accident, serves as an unpredictable parameter whose unique circumstance causes him to disrupt the computer’s ability to make reliable classifications. Dick proposes that even the most sophisticated device may fail to predict with absolute accuracy the breadth of individual variability, as even a single outlier can undermine a system that is presumed – and required – to function perfectly. This simple yet vital suggestion, so easily overlooked by eager Galtonian and Lombrosian hypotheses of classification, is a crucial consideration in light of the broad variability observed in the human sciences – a variability that ought by all means to be reflected in individual differences in brain activity. Dick’s accompanying vision of the future, not unlike our own present, is fraught with hyperspecialization that begets a dangerous technological dependency: individuals in different areas of even the same general field are completely unable to understand the details of one another’s professional expertise, and consequently, no individual is fully able to comprehend the workings of the very tools upon which they rely to make life-altering decisions. The social application of the thoroughly complex technology of fMRI could easily create a similar dilemma; indeed, even the relatively simple polygraph has been widely misapplied and misinterpreted during its controversial tenure as a putative classification tool (Cacioppo 2003).
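The misapplication risk has a simple quantitative face: a screening classifier that is accurate on individual tests still fails badly when what it screens for is rare, the base-rate problem emphasized in the National Academy polygraph report. The 90 % accuracy and 1-in-1,000 prevalence figures below are hypothetical assumptions chosen for illustration, not empirical results:

```python
# Hypothetical screening scenario: why even an accurate "lie detector"
# misclassifies badly when the condition it screens for is rare.
# Accuracy and prevalence figures are assumptions, not measured values.

sensitivity = 0.90   # assumed: detector flags 90% of actual deceivers
specificity = 0.90   # assumed: detector clears 90% of truth-tellers
prevalence = 0.001   # assumed: 1 in 1,000 examinees is actually deceptive

population = 100_000
deceivers = population * prevalence              # 100 people
truthful = population - deceivers                # 99,900 people

true_positives = deceivers * sensitivity         # 90 correctly flagged
false_positives = truthful * (1 - specificity)   # 9,990 wrongly flagged

# Of everyone flagged as deceptive, what fraction actually lied?
precision = true_positives / (true_positives + false_positives)
print(f"flagged examinees who actually lied: {precision:.1%}")  # under 1%
```

Under these assumptions, fewer than one in a hundred flagged examinees is actually deceptive; the rest are Dick’s “variable men,” truthful outliers misclassified by a system presumed to function perfectly.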
Dick’s variable man possesses a unique savant-like intuition for technological devices amidst the otherwise narrow-minded, hyper-specialized future society in which he finds himself. As such, he is contracted by the government, which is engaged in a prolonged intergalactic conflict, to build a powerful weapon of war. But along with his technical genius comes a simple but important perspective: not just how to innovate technically, but how best to deploy his inventions. In lieu of the weapon he is contracted to build, he creates a device with which he alters the past, circumventing the war altogether. It is such broad thinking that an institution of power, which has the resources and authority to steer the application of our technologies and resources, would do well to adopt.

The technical demands of functional MRI are extensive, and its reduction to a simple “lie detector” would be a narrow application of its capabilities, further complicated by the risk of misappropriation in service of a particular goal. Furthermore, if fMRI signal detection could be refined to the point of reliable application, it would be more valuable as a means to diagnose and treat medical and psychological disorders. Errors in such a diagnostic setting, though hopefully minimized, would at worst yield a null result. An fMRI lie detector, by contrast, through its sheer complexity, expense, and technological impressiveness, is at risk of being misinterpreted, oversimplified, and abused as an invalid means of social control.


Bibliography:

  1. Abe N, Suzuki M, Mori E, Itoh M, Fujii T (2007) Deceiving others: distinct neural responses of the prefrontal cortex and amygdala in simple fabrication and deception with social interactions. J Cogn Neurosci 19(2):287–295
  2. Arnheim R (1954/1974) Art and visual perception: a psychology of the creative eye. University of California Press, Berkeley/Los Angeles
  3. Cacioppo JT (2003) Committee to review the scientific evidence on the polygraph. The polygraph and lie detection. National Academy Press, Washington, DC
  4. Damasio A (1995) Descartes’ error: emotion, reason, and the human brain. Harper Perennial, New York
  5. Dick PK (1966) The unteleported man/Lies, Inc. Ace Books, New York
  6. Dick PK (1968) Do androids dream of electric sheep? Doubleday, Garden City
  7. Dick PK (1974) Flow my tears, the Policeman said. Doubleday, Garden City
  8. Dick PK (1953) The variable man. Space Science Fiction 2(2)
  9. Dick PK (1956) The minority report. Fantastic Universe
  10. Galton F (1879) Composite portraits. J Anthropol Inst Great Brit Ireland 8:132–144
  11. Hakun JG, Ruparel K, Seelig D, Busch E, Loughead JW, Gur RC, Langleben DD (2009) Towards clinical trials of lie detection with fMRI. Soc Neurosci 4(6): 518–527
  12. Kozel FA, Johnson KA, Mu Q, Grenesko EL, Laken SJ, George MS (2005) Detecting deception using functional magnetic resonance imaging. Biol Psychiatry 58:605–613
  13. Langleben DD, Loughead JW, Bilker WB, Ruparel K, Childress AR, Busch SI, Gur RC (2005) Telling truth from lie in individual subjects with fast event-related fMRI. Hum Brain Mapp 26(4):262–272
  14. Lashley KS (1950) In search of the engram. Soc Exp Biol Symp 4:454–482
  15. Loftus E (2003) Make-believe memories. Am Psychol 58:867–873
  16. Lombroso C (1876–1897) Criminal Man. Duke University Press, Durham
  17. Mohamed FB, Faro SH, Gordon NJ, Platek SM, Ahmad H, Williams JM (2006) Brain mapping of deception and truth telling about an ecologically valid situation: functional MR imaging and polygraph investigation-initial experience. Radiology 238(2):679–688
  18. Monteleone GT, Phan KL, Nusbaum HC, Fitzgerald D, Hawkley LC, Irick JS, Fienberg SE, Cacioppo JT (2009) Detection of deception using functional magnetic resonance imaging: well above chance, though well below perfection. Social Neuroscience
  19. Nichols TE (2012) Multiple testing corrections, nonparametric methods, and random field theory. Neuroimage 62(2):811–815
  20. Orwell G (1949) Nineteen eighty-four. Secker and Warburg, London
  21. Phan KL, Magalhaes A, Ziemlewicz TJ, Fitzgerald D, Green C, Smith W (2005) Neural correlates of telling lies: a functional magnetic resonance imaging study at 4 Tesla. Acad Radiol 12(2):164–172
  22. Porter R (2003) “Marginalized practices”, The Cambridge history of science: eighteenth-century science, The Cambridge history of science, vol 4. Cambridge University Press, pp. 495–497
  23. Raichle ME, Snyder AZ (2007) A default mode of brain function: a brief history of an evolving idea. Neuroimage 37(4):1083–1090
  24. Rumelhart DE, McClelland JL, the PDP Research Group (1986) Parallel distributed processing: explorations in the microstructure of cognition, vol 1 and 2. MIT Press, Cambridge, MA
  25. Violi A (2004) Lombroso e i Fantasmi Della Scienza. In: Forum Avanzato di Ricerca. Universitaria Multimediale (rivista elettronica)
  26. Gould SJ (1981) The mismeasure of man. W.W. Norton, New York
  27. Gray RT (2004) About face: German physiognomic thought from Lavater to Auschwitz. Wayne State University Press, Detroit
