Neuroethics lies at the intersection of the clinical brain sciences, psychology, law, and moral philosophy. Although neuroethics overlaps to some extent with traditional issues in bioethics, it should not be categorized as a subdivision of bioethics. The connections between the brain and behavior explored by the neurosciences raise a set of ethical questions distinct from those in traditional areas of bioethics. Neuroethics can be divided into the ethics of neuroscience and the neuroscience of ethics. The first branch involves questions such as the ratio of risks to benefits for patients taking psychotropic drugs or undergoing neurosurgical procedures and whether they have the mental capacity to give informed consent to these interventions. The second branch involves investigating the neurobiological basis of moral reasoning and decision-making through brain imaging. After providing a historical background of the field and key definitions, this chapter discusses the three currently most debated topics in neuroethics: brain imaging to assess moral and legal responsibility; cognitive enhancement through psychopharmacology; and the use of imaging and other techniques to diagnose and treat prolonged disorders of consciousness.
Neuroethics is the study of the moral reasons for and against recording, monitoring, and intervening in the human brain. It is an interdisciplinary ﬁeld lying at the intersection of the clinical brain sciences of neurology, psychiatry, neurosurgery and radiology, cognitive and experimental psychology, law and moral philosophy. While advances in neuroscience have greatly expanded what we can do with the brain, neuroethics raises and discusses questions about what we should or should not do with different drugs, techniques, and devices that can alter the brain and mind in beneﬁting or harming people. Psychotropic drugs can be used not only to treat neurological and psychiatric disorders but also to enhance normal cognitive functions. Structural and functional brain imaging may be able to diagnose these disorders at an early stage and predict who will develop them before symptoms appear. Imaging might also elucidate the neurobiological underpinning of the mental capacities for behavior control and indicate whether persons are or are not morally or criminally responsible for their actions. The ability to record brain activity might also distinguish brain-injured patients who are vegetative from those who are minimally conscious and guide drug therapy or neurostimulating techniques that might enable them to recover many of their physical and cognitive functions and restore their independence. In addition, it may be possible for devices to allow patients who cannot communicate verbally or gesturally to express their wishes about continuing or discontinuing life-sustaining care.
In addition to diagnosing and predicting neuropsychiatric disorders and clarifying questions about responsibility, could imaging be used to detect a person’s thoughts? Would this violate the privacy of information about a person’s brain? Would the use of psychotropic drugs and neurostimulating techniques to enhance normal cognitive functions exacerbate social inequality by improving the welfare of the cognitively better off over that of the cognitively worse off? Can research into the effects of enhancement be justiﬁed given the scarcity of health resources? Would imaging conﬁrming that a brain-injured patient was conscious be valuable to the patient? How far should we go in developing and applying these measures of and interventions in the brain? Should there be limits? If so, then where should they be? And who should decide on formulating and implementing guidelines, policies, and laws to regulate these practices?
This chapter considers these general and more speciﬁc questions in the context of the three most prominent and contentious ethical topics in neuroscience: (1) Whether brain imaging can establish that persons can control their thought and behavior and be morally and criminally responsible for their actions; (2) Whether the beneﬁts of enhancing normal cognitive functions through psychopharmacology outweigh the risks; and (3) Whether imaging and other techniques to diagnose and treat patients with prolonged disorders of consciousness can beneﬁt them.
History And Definitions
Long before the United States Congress proclaimed the 1990s the “Decade of the Brain,” scientists and medical practitioners had been directly or indirectly intervening in the human brain and affecting patients in positive and negative ways. Many cultures practiced trepanning for thousands of years as a treatment for epilepsy, headache, and mental disorders. This is a primitive form of neurosurgery in which a hole is drilled or scraped into the skull. In the second century CE, the Greek physician Galen recommended the use of electric eels to treat headache and facial pain. Later, in the late nineteenth century, Scottish neurologist David Ferrier and colleagues showed that direct electrical stimulation of the brain could change behavior. In the middle of the twentieth century, American-born Canadian neurosurgeon Wilder Penfield probed a region of the exposed temporal lobe of some patients undergoing “awake” neurosurgery for epilepsy and elicited memories of patients’ past experiences. Not all interventions in the brain have been salutary or have advanced our understanding of how it enables cognitive, affective, and other functions. Perhaps the most notorious intervention was the frontal lobotomy performed on thousands of patients by American neurologist Walter Freeman in the 1940s and 1950s to treat mental disorders. This technique was based on the earlier prefrontal leucotomy developed by Portuguese neurologist Egas Moniz, who received the Nobel Prize in Physiology or Medicine in 1949 for the “therapeutic value” of the procedure. The development of psychotropic drugs in the 1950s largely obviated the need for psychosurgery. Even with safer and more effective drug therapy, however, many patients do not respond to it and may be candidates for a more advanced form of psychosurgery.
The current use of magnetic resonance imaging (MRI)-guided stereotactic neurosurgery has greatly reduced the risk of intracranial surgery and improved outcomes for patients undergoing different techniques. These include not only structural surgery to remove tumors or clip aneurysms but also functional neurosurgery such as deep brain stimulation (DBS) to modulate dysfunctional brain circuits in neurological and psychiatric disorders. All forms of neurosurgery, as well as the use of psychotropic drugs and brain imaging, involve the same fundamental ethical questions about whether the beneﬁts outweigh the risks, whether patients and research subjects can give informed consent to take the drugs or undergo the procedures, and what the obligations of clinicians and researchers are to their patients and research subjects.
In the 1990s, French neuroscientist Jean-Pierre Changeux used the term “neuroethics” at a symposium on biology and ethics at the Pasteur Institute in Paris (Damasio 2003). One can ﬁnd even earlier uses of this term in the neuroscience literature. But 2002 was the most eventful year for the burgeoning ﬁeld of neuroethics. Conferences were held in San Francisco and London to discuss ethical issues in neuroscience research and practice. One broad deﬁnition of “neuroethics” offered at the San Francisco conference was “the examination of what is right and wrong, good and bad, about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain” (Saﬁre 2002). The fundamental question of neuroethics is how different measures and manipulations of the brain can be justiﬁed with a view to whether they beneﬁt or harm patients and research subjects who undergo them.
Another signiﬁcant event for neuroethics in this same year was the publication of a landmark paper by neuroscientist and philosopher Adina Roskies (2002). She argued that neuroethics should not be categorized simply as a subcategory of bioethics because the connection between the brain and behavior and the ability to visualize and manipulate the brain raise a novel and distinct set of ethical questions. Neuroscience has advanced exponentially in the last 25 years with the advent and use of functional imaging in the form of fMRI and the use of techniques such as DBS, transcranial magnetic stimulation (TMS), and transcranial direct current stimulation (tDCS). Roskies drew a general distinction between the ethics of neuroscience and the neuroscience of ethics. The ﬁrst branch includes considerations of risk and beneﬁt to patients whose brains are scanned or altered with drugs, electrical stimulation, or other techniques as well as whether patients and research subjects have the mental capacity to consent to them. The second branch pertains mainly to examining the neurobiological basis of moral reasoning and decision-making. While acknowledging that “each of these can be pursued independently to a large extent,” Roskies noted that “perhaps most intriguing is to contemplate how progress in each will affect the other.” One example of overlap between the ethics of neuroscience and the neuroscience of ethics is when a patient with a neuropsychiatric disorder has to decide whether to accept or refuse an intervention in the brain to treat it. The capacity to weigh the beneﬁts and risks of the intervention is necessary for the patient to give informed consent to receive it. This in turn presupposes a sufﬁcient degree of function in brain regions mediating the capacity for rational and moral reasoning and decision-making. 
Paying particular attention to the neuroscience of ethics, Roskies asked whether increasing knowledge of the brain and how it regulates thought and behavior would change our views about moral and legal responsibility. In addition, she raised questions about how information from brain scans might be used for lie detection in legal settings and the more general implications for privacy and confidentiality of information about our brains. At a deeper level, Roskies asked whether future developments in neuroscience would cause us to revise our definition of “normal” behavior and alter our understanding of what makes us human. She concluded her article by arguing that neuroethics should not be confined to specialists in neuroscience, philosophy, and law but should include public debate with broad social participation from all stakeholders in discussing the implications of reading and manipulating people’s brains.
Brain Imaging: Implications For Moral And Criminal Responsibility
The ability to visualize the neural correlates of reasoning and decision-making raises questions about how much of our thought and behavior is within our conscious control. These questions are more likely to arise when imaging shows or suggests abnormalities in brain regions mediating the relevant mental capacities. The main types of structural brain imaging are computed tomography (CT) and magnetic resonance imaging (MRI). The main types of functional brain imaging are positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). CT scans provide three-dimensional x-ray images of the anatomical features of the brain. MRI uses magnetic fields and radio waves to measure the response of hydrogen nuclei in brain tissue, generating images of brain structure with generally higher spatial resolution than CT. PET measures brain function in terms of levels of glucose metabolism. With the help of a radioactive tracer, PET detects gamma rays emitted in the brain and generates images displaying differences in metabolism across brain regions. fMRI measures brain activity through changes in blood oxygenation.
Functional imaging techniques do not provide “snapshots” of and are not identical to actual events and processes occurring in a person’s brain. Instead, they are visualizations of statistical analyses based on a large number of images averaged over scans taken from many people. They are more accurately described as scientiﬁc constructs than as real-time measures of individual brain activity (Farah and Wolpe 2004). Functional imaging generates group data, and one cannot draw a direct inference from information about the brains of groups to information about the brains of individuals and how they affect the mental states that issue in their actions. Because of the inferential distance between images of brain activity and what actually occurs in the brain, scans are limited in explaining how the brain mediates the mental capacities necessary for moral reasoning and decision-making (Roskies 2013). This has important implications for claims about control of normal and abnormal behavior and whether or to what extent brain imaging can tell us that a person is responsible for his or her actions.
Although functional and structural scans may show correlations between certain features of the brain and behavior, correlation is not causation. Therefore, images of brain anatomy and processing cannot completely explain or determine behavior. The absence of a causal connection between what imaging is presumed to display and what actually occurs in the brain weakens the claim that brain processes completely determine our actions and undermine conscious free will and moral responsibility. Many claims about the brain determining choices and actions are based on anatomical and functional properties in localized regions of the brain. Yet, reasoning and decision-making are mediated by a distributed network of interacting neural pathways. Many brain regions are involved in a wide variety of functions, and this complicates any effort to connect a particular brain feature with a particular action or pattern of behavior. Even if there were a strong correlation between activity in a particular brain region and a person’s decision to act, this would not imply that the brain activity caused the decision. It is possible that the decision caused the change in brain activity. It cannot be known whether changes in blood flow and metabolic demand in the brain shown by fMRI or PET caused the subject to form and execute an intention to act, or whether these mental acts caused the neural activity.
Claims that brain abnormalities undermine behavior control and justify excusing individuals from criminal responsibility for their behavior are equally suspect. Although functional imaging is more recent and technologically more sophisticated than structural imaging, the latter can be more helpful in contributing to an account of individual actions. By pressing on or inﬁltrating brain regions mediating the capacity to reason and inhibit impulses, a tumor in one of these regions detected by CT or MRI could impair this capacity and excuse the person from responsibility for a criminal act. As noted, though, reasoning and decision-making are mediated by distributed rather than localized neural networks. As part of the distributed model, redundancies in the brain could allow intact regions to bypass a damaged area and take over functions disabled by the tumor. This process might also be enabled through neuroplasticity, the brain’s ability to reorganize itself by forming new neural connections. So the presence of a structural or functional brain abnormality in a region associated with rationality and impulse control by itself would not mean that a person could not control his or her behavior. A brain abnormality displayed by neuroimaging will not establish that a person lacked the capacity to control his or her behavior, was impaired in this capacity, or had the capacity but failed to exercise it at the time of action. Without this determination, the presence of the abnormality would not be sufﬁcient to excuse a person from responsibility for a criminal act. Nevertheless, insofar as the presence of a brain abnormality suggests the possibility of brain-mediated impairment in moral judgment as reﬂected in the person’s behavior, this ﬁnding may be partial evidence of diminished control and a mitigating factor regarding responsibility for the act. 
An additional limitation of functional brain imaging is that scans of a defendant’s brain are taken some time after the commission of a crime. Because brain activity changes over time, scans cannot capture what was occurring in the defendant’s brain when he or she acted. Scans cannot provide a retrospective diagnosis of the defendant’s neural and mental states at the earlier time. Nor can the artificial setting of the scanner reproduce how factors in the defendant’s immediate environment such as social cues influenced his or her brain and behavior when he or she acted.
Some cases appear to involve a causal connection between a brain abnormality and a criminal act or a series of such acts. A teacher in Virginia developed pedophilia and sexually assaulted his stepdaughter (Burns and Swerdlow 2003). Imaging revealed a large tumor pressing on his orbitofrontal cortex. This region is crucial for reasoning and impulse control. His altered behavior resolved when the tumor was removed. But the pedophilia returned with the growth of a second tumor of the same type in the same brain region. The history of the connection between the tumor and his actions conﬁrmed that it caused the change in his behavior and that he could not refrain from acting out his pedophilia despite knowing that his behavior was inappropriate. He met the cognitive criterion of criminal responsibility in recognizing reasons against his actions. Yet he failed to meet the volitional criterion because of his inability to control his impulses. This was enough to excuse him from criminal responsibility for his behavior. It is important to emphasize, however, that the presence of the tumor itself was not the excusing condition in this case. Rather, what excused the teacher from responsibility was the extent to which the tumor impaired his mental capacity to control his behavior. It is equally important to point out that most cases in which imaging is used to diagnose impaired behavior control will show only correlations between images of abnormal brain anatomy and activity and criminal behavior and thus stop short of establishing a causal connection between them that is necessary to excuse a person from responsibility.
More reﬁned forms of brain imaging may help to inform judgments of moral and criminal responsibility. In some instances, brain imaging may be able to clarify or conﬁrm assessments of cognitive and volitional control of actions when behavioral evidence alone is ambiguous or inconclusive. Brain imaging can supplement the law’s practice of holding people responsible, or excusing them from responsibility, on the basis of certain mental capacities. But it will not supplant this practice because responsibility is a normative concept that cannot be reduced to or explained away by empirical descriptions of brain activity. Even in cases where there are strong correlations between normal or abnormal brain features and a person’s actions, neuroimaging will be only one component of a holistic explanation of a person’s actions and whether he or she can control and be responsible for them. What matters for normative questions about behavior is not the brain itself and whether it is functional or dysfunctional but how it enables or disables the mental capacities necessary for moral and criminal responsibility.
Cognitive Enhancement Through Psychopharmacology
Cognitive enhancement refers to interventions in the brain that improve normal levels of attention, concentration, and information processing in executive functions such as reasoning and decision-making. There are three main conceptions of cognitive enhancement: augmenting, diminishing, and optimizing. The first conception considers interventions in the brain as enhancements when they improve some function by increasing its ability to do what it normally does. An enhancement is an intervention “designed to improve human form or function beyond what is necessary to restore or sustain good human health” (Juengst 1997). The second conception claims that some functions can be improved by diminishing the extent of what they do and their effects. For example, methylphenidate (Ritalin) may help some people to perform better on a certain cognitive task because the drug diminishes the content of their thought, enabling them to avoid distracting stimuli and focus on that task. The third conception takes enhancement to be any intervention that “aims at optimizing a specific class of information-processing functions: cognitive functions, physically realized by the human brain” (Metzinger and Hildt 2011). A broad, optimizing conception is probably the most consistent with people’s intuitions about cognitive enhancement. The goal of altering cognitive functions is not just to improve performance on a particular task but also to promote flexible behavior and adaptability to the environment. This is more likely to occur when neural and mental processes are neither underactive nor overactive.
Noninvasive forms of brain stimulation such as TMS and tDCS have enhanced the performance of a small number of healthy research subjects on some cognitive tasks. But psychopharmacology is the most common means of cognitive enhancement. Some studies estimate that approximately 25 % of American secondary school students use psychostimulants such as dextroamphetamine (Adderall), methylphenidate, and the wakefulness-promoting drug modaﬁnil (Provigil) for enhancement. Earlier studies estimated that roughly 7 % of university students in the United States with no psychiatric or neurological disorder used these drugs. Roughly, 5 % of the working population in Germany uses psychotropic drugs to enhance their cognitive functions. In 2008, the journal Nature published the results of an informal survey of its readers. One-third of the 1,400 readers who responded said that they had used these drugs for off-label nontherapeutic reasons. The use of these drugs to do better on exams, write more successful grant applications, and improve work performance is likely to increase.
For those whose cognitive functions fall within a species-normal range, the drugs just mentioned may improve attention, concentration, and other functions associated with working memory. This involves the capacity to hold and process information for short periods for tasks such as decision-making. Experiments have shown that methylphenidate generally has moderate enhancing effects on these tasks by increasing levels of dopamine in the brain. Short-term studies indicate that people with a lower baseline of working memory on an absolute scale tend to beneﬁt more from this drug while those with a higher baseline tend to beneﬁt less. In some instances, the latter experience impairment in some cognitive functions. Children with attention deﬁcit hyperactivity disorder (ADHD) tend to do better academically when taking methylphenidate and other stimulant medications than those with the same disorder who do not take them. But there is no evidence that these drugs signiﬁcantly improve academic performance of children who do not have the disorder.
In a recent experiment using tDCS to test the learning and application of mathematical information, researchers found that stimulating an area of the subjects’ prefrontal cortex impaired learning new information but enhanced the application of what was learned. Stimulating an area of the parietal cortex had the opposite effect of enhancing learning while impairing the ability to apply the new information (Iuculano and Cohen Kadosh 2013). The upshot of this study and pharmacological studies is that enhancing some mental functions through drugs or electrical stimulation may come at the cost of other mental functions. There may be cognitive trade-offs that one would have to weigh in deciding on enhancement. The studies suggest that there are optimal levels of cognitive functions and limits to the extent to which they can be improved.
Modaﬁnil activates dopamine, which then activates norepinephrine and histamine in a process that blocks the hypothalamus from promoting sleep. This enables people taking the drug to be more alert and focused, even when they are sleep-deprived. Although it is prescribed for sleep disorders, it has been used by airline pilots with normal sleep-wake cycles to remain alert longer on transcontinental ﬂights. It can also enable students studying for exams or writing papers, or researchers writing grant applications, to forego sleep and allow more time for these activities. Like other psychostimulants, the effects of modaﬁnil are limited to certain cognitive functions, can vary among persons, and may not always be beneﬁcial.
All of these drugs have risks. Methylphenidate stimulates the central nervous system and inhibits uptake of dopamine in the brain. Increasing concentrations of this neurotransmitter in normally functioning brains could disrupt the reward system and make one susceptible to addictive behaviors such as gambling and hypersexuality. Chronic use of methylphenidate or dextroamphetamine could increase the risk of hypertension, stroke, heart attack, insomnia, and psychosis. As with methylphenidate, occasional use of modafinil may not have any untoward effects on those who use it to stay awake and remain alert and focused. But chronic use could be problematic. If one used modafinil repeatedly to forego sleep and remain alert over long periods, then prolonged sleep deprivation could be a risk factor for metabolic and endocrine disorders such as obesity and diabetes, as well as cardiovascular disease. These conditions are more common among the chronically sleep-deprived. Alternations between sleep and attention are adaptations to the environment. If the brain senses that constant attention is a sign of constant demand, then sustained attention could overload it with information. Unnecessary wakefulness could have more adverse physiological effects than unnecessary sleep.
Research into the addictive potential of cognitive enhancement may pose an ethical problem regarding the determination of acceptable risk and the safety of the drugs. Participants in prospective studies of the drugs would be healthy subjects with no history of addiction. Those in the experimental arm of a study receiving regular doses of psychostimulants and experiencing heightened dopaminergic effects in their brains’ reward system would be exposed to a potentially addictive substance. Proving that chronic use of a drug was in fact addictive could cause previously healthy individuals to become addicted. Research subjects should have the right to participate in these studies if they are capable of giving informed consent, and this would include being informed of any risks. But researchers’ duty of nonmaleficence in protecting subjects from harm would raise the question of whether it would be consistent with this duty and permissible for them to expose research subjects to the risk of addictive behavior. Nevertheless, with careful design and monitoring of effects in subjects, studies that quantified the long-term risks associated with chronic use of cognition enhancing drugs would be both empirically and ethically justified.
This research would require resources from public health systems, however. Because health resources are scarce, some would argue that research into the effects of cognitive enhancement should receive lower funding priority than research into developing preventive and therapeutic treatments for neuropsychiatric disorders. This is partly because there is a more urgent need to generate scientiﬁc knowledge of the causes of and possible treatments for these disorders, as well as diseases like cancer, than to generate knowledge of the effects of drugs aimed at improving normal brain function. For this reason, some argue that research into cognitive enhancement should not be publicly funded at all. This raises the more fundamental question of what a just society should pay for in providing health resources to its citizens. A just society is one in which institutions ensure that all citizens have adequate access to health care for a decent minimal level of physical and cognitive functions. These are necessary for equal opportunity to achieve a minimally decent level of well-being over the course of people’s lives. Insofar as enhancement involves raising these functions above a decent minimum, it is questionable whether the state would be obligated to provide resources necessary for enhancement, even if it could afford to pay for them. Privately funded research and distribution of enhancing drugs would be an alternative. Depending on their cost, this could mean that the drugs would not be available to all who might want them. This would not necessarily exacerbate social inequality, though, given data showing that cognition enhancing drugs tend to improve functions more for the cognitively worse off than for the cognitively better off. There might not be any leveling down among the cognitively better off, whose capacity would remain relatively unchanged, and there would be some improvement in the cognitively worse off. This could weaken social reasons against enhancement.
In 2008, a group of neuroscientists and ethicists formulated a set of recommendations based on the presumption that mentally competent adults should be permitted to engage in cognitive enhancement (Greely et al. 2008). These recommendations included an evidence-based approach to the evaluation of benefits and risks and enforceable policies regarding the use of cognition enhancing drugs to support fairness, protect individuals from coercion, and minimize enhancement-related socioeconomic disparities. Questions about long-term risks of cognition enhancing drugs can only be answered after a sufficient number of placebo-controlled studies of their effects have been completed. Even if the research could establish that risks of cognitive enhancement were within reasonable limits, some might question whether funding any research into improving neural and mental capacities could be justified. In the absence of public funding and sufficient data on long-term risks, competent individuals should have the right to enhance. But such a right would come with the proviso that enhancing oneself did not result in harm to others.
Prolonged Disorders Of Consciousness
Following severe brain injury from trauma, hypoxia due to cardiac arrest, or infections, some patients progress through stages of coma, the vegetative state (VS) and the minimally conscious state (MCS), and gradually recover a state of full awareness (UK Royal College of Physicians 2013). Neurologists divide consciousness into arousal and awareness. Coma is characterized by the lack of arousal or awareness in the form of complete unresponsiveness. Some comatose patients regain full awareness, usually within 2–4 weeks of a brain injury. Others eventually lose all brain functions and die. Still other patients progress from a coma to a vegetative state, where they have sleep-wake cycles but are not aware of self or surroundings. Many neurologists consider a persistent VS to become permanent 3 months after an anoxic injury or 12 months after a traumatic injury. Permanent VS patients have no chance of regaining any capacity for consciousness. Those who progress from the VS to the MCS have varying degrees of awareness. The MCS is characterized by profound unresponsiveness despite intermittent evidence of awareness. Interaction with and behavioral responses to others are inconsistent but reproducible. Emergence from the MCS consists in the recovery of reliable and consistent responses. Why some individuals remain in a VS, progress to an MCS, or emerge from it may be explained by differences in integration and activation of neurons in the thalamus (the brain’s main information relay station), projections from the thalamus to the cerebral cortex, and connections between different cortical regions. There is little or no integrated thalamic-cortical activity in the VS, but there is activity in the MCS, though it is not consistently sustained. The greater the integration of these brain features, the greater the probability of recovery of a high level of consciousness, cognitive and physical functions, and the ability to live with some degree of independence.
The type of brain injury can influence the probability of meaningful recovery. There tends to be a greater degree of thalamic-cortical integration in patients with traumatic brain injury than in those with anoxic injury, and this may be an indicator of potential for improvement in neurological status. Even here, though, there is considerable variation among patients’ brains and the capacity to recover, which can make it difficult to predict the outcome for each.
The VS and MCS need to be distinguished from locked-in syndrome, which is not a disorder of consciousness. Rather, it is a state of full consciousness but with almost total paralysis. Eye or eyelid movements are often the only voluntary actions locked-in patients can perform and their only means of communication. Some patients lack even this ability and are completely locked in. These patients have intact integrated cortical function but damage to a region of the brainstem, usually from a hemorrhagic stroke, which disrupts connections between the cortex and the rest of the body.
Many patients diagnosed as vegetative are in fact minimally conscious. The rate of misdiagnosis of the MCS has been estimated to be as high as 40%. A mistaken diagnosis of a patient as permanently vegetative rather than minimally conscious could lead to the withdrawal of life support in the form of artificial hydration and nutrition. By causing the patient’s death, this action would preclude interventions that might restore a greater degree of consciousness and promote recovery of at least some physical and cognitive functions. These interventions might include drugs to activate certain neurotransmitters or deep brain stimulation to activate the thalamus and its projections to the cerebral cortex, which are critical for awareness. While there have been some cases of MCS patients who have recovered some degree of functional independence, there have been no cases of patients who have recovered these functions to preinjury levels. The sedative-hypnotic drug zolpidem (Ambien) has increased the level of awareness in a small number of MCS patients. But its overall benefit for this group remains questionable.
Unlike vegetative patients, where the brain regions associated with pain are no longer active, the brains of MCS patients show activation in response to painful stimuli in functional neuroimaging studies (Fernandez-Espejo and Owen 2013). Insofar as they can feel pain and experience continued mental distress from it, patients living in a minimally conscious state could be worse off than patients living in a vegetative state with no awareness and no capacity for pain. In light of the physical and cognitive disability of most MCS patients and their capacity to feel pain and suffer, consciousness may not have value and could have disvalue for these patients. Their pain and suffering may increase if they are unable to report their experience to physicians treating them. “If such patients suffer, they can be harmed by continuing treatment; there may be stronger reasons in terms of nonmaleficence and the best interests of the patients to allow them to die. This must be balanced against the possibility of having positive experiences and the greater uncertainty about prognosis for such patients compared with those in a permanent vegetative state” (Wilkinson et al. 2009). Analgesia can be given to prevent or relieve pain in cases where it is known or suspected. But this by itself would not be an indicator of significant recovery from their brain injury.
Because of their neurologically compromised state, it is difficult to know the wishes of minimally conscious patients. Brain–computer interfaces (BCIs) and other devices may allow some locked-in patients to communicate their wishes about treatment. They are fully conscious and may be able to process the semantic information necessary to communicate when they are unable to do this verbally or gesturally. Most if not all MCS patients would not be able to manipulate a BCI and may not have the necessary cognitive and emotional capacity to fully consider the probable consequences of continuing or discontinuing life-sustaining treatment. In this regard, they would not be able to give informed consent to either of these actions. A surrogate might give proxy consent regarding treatment. Without a clear verbal or written directive from the patient, however, there would be uncertainty about whether the patient would want to initiate, continue, or withdraw life-sustaining and potentially restorative interventions. One neuroscientist expresses the problem thus: “It is not clear if the key issue is ‘consciousness,’ or the clinical experience of these patients per long-term recovery of ‘meaningful’ life. This conundrum stresses the preinjury intention of the patient to live in a VS or MCS. Some might opt for any life, but most would not enjoy the prison of VS or MCS. Without knowing ante-injury, it is hard to make the right clinical call” (Knight 2008).
Patients with more integrated brain structure and function indicating a higher probability of recovery could be identified shortly after brain injury by neuroimaging. Aggressive treatment could then be given to them, which might increase the likelihood of a significant degree of restoration of cognitive and physical functions and some degree of independence for at least some of these patients. To date, though, few treatments have been effective in bringing about meaningful recovery for most minimally conscious patients. As a matter of justice, given the severity of their injury and the judgment that they are worse off than other patients with other disorders, priority in medical research should be given to conducting a sufficient number of clinical trials and developing treatments for these patients that would restore cognitive and physical functions for many or most of them. The claim that they should receive this priority would be strengthened by the large number of patients in this group. It has been estimated that there are between 250,000 and 300,000 patients in the United States alone languishing in nursing homes with disordered states of consciousness. The evidential basis of neuroscience consists mainly of probabilistic group data from brain imaging and other techniques and is not clear-cut in individual cases. A particular patient in a minimally conscious state may have more integrated cortical function than other patients in the same state, which may indicate a better prognosis for him or her. This could warrant invoking a precautionary principle against withdrawing life support from such a patient, since such an act would automatically preclude any chance of recovery. But given that resources for medical research are limited, decisions about allocating them must be made with a view to how they will affect groups rather than individuals.
And any reasonable concept of priority in allocating resources for those with prolonged disorders of consciousness must be conditional rather than absolute. The cost and probability of efficacy in research and development of drugs and techniques to improve the prospect of meaningful recovery in minimally conscious patients would have to be weighed against the therapeutic value of other interventions for other medical conditions. These are open questions that need to be addressed at both medical professional and health policy levels.
There are global ethical dimensions to neuroethics. Because neurological and psychiatric disorders constitute a significant percentage of the world’s burden of disease, ethical issues such as the potential benefits and risks to patients undergoing interventions in the brain, as well as the obligations of medical professionals providing them, apply to a large number of people. These issues are just as pertinent to the developing world as they are to the developed world because brain disorders do not discriminate on the basis of geography or economics. Yet they raise questions about justice, since not all people with these disorders have equal access to proven or potentially therapeutic interventions. The neurological basis of diseases of the brain and mind is not a function of politics or culture. But political and cultural institutions and how they allocate limited health resources are critical to whether patients have access to treatments for these diseases. In addition, cultural factors may influence the classification of altering the brain and mind as therapy or enhancement. For example, the use of psychoactive substances such as peyote and mescaline by Native Americans and certain tribes in Mexico may be considered standard treatment for them but enhancement for other groups. Another social justice issue is whether unequal access to neuroenhancers and the presumed cognitive benefit they provide to a privileged few who can afford them give an unfair advantage to some over others in education and employment, resulting in unequal levels of well-being. All of these issues underscore the important role of social, cultural, and political frameworks in shaping discussion of the ethical issues about different interventions in the brain.
This chapter has focused on the three most debated topics in neuroethics. There are other important issues in both the ethics of neuroscience and the neuroscience of ethics. These include questions about what radiologists should do with incidental findings from brain imaging of patients and research subjects. While clinically significant findings would require referral to a specialist, it is not always clear how this information should be presented to subjects. Brain imaging may advance to the point where it can reveal the content of one’s thoughts. Currently, this is possible in only a very crude sense. But if imaging could effectively read the mind, then this would raise concern about brain privacy and the need for protection of information lying inside the skull. Neural prosthetics such as deep brain stimulation and BCIs raise questions about personal identity and whether these devices rather than the conscious mind control the behavior of those in whom they are implanted. In the neuroscience of ethics, imaging could lead to a better understanding of the neural basis of our moral judgments and why we have certain intuitions about right and wrong actions. All of these recordings of and interventions in the brain require continued reflection and discussion of the positive and negative ways in which they affect us now and how they might affect us in the future.
- Burns, J., & Swerdlow, R. (2003). Right orbitofrontal tumor with pedophilia symptom and constructional apraxia sign. Archives of Neurology, 60, 437–440. doi:10.1001/archneur.60.3.437.
- Damasio, A. (2003). Looking for Spinoza: Joy, sorrow and the feeling brain. Orlando: Harcourt.
- Glannon, W. (2007). Bioethics and the brain. New York: Oxford University Press.
- Glannon, W. (2011). Brain, body and mind: Neuroethics with a human face. New York: Oxford University Press.
- Levy, N. (2007). Neuroethics: Challenges for the 21st century. Cambridge, UK: Cambridge University Press.
- Farah, M., & Wolpe, P. (2004). Monitoring and manipulating brain function: New neuroscience technologies and their ethical implications. Hastings Center Report, 34(3), 35–45. doi:10.2307/3528418.
- Fernandez-Espejo, D., & Owen, A. (2013). Detecting awareness after severe brain injury. Nature Reviews Neuroscience, 14, 801–809. doi:10.1038/nrn3608.
- Greely, H., Sahakian, B., Harris, J., Kessler, R., Gazzaniga, M., Campbell, P., & Farah, M. (2008). Towards responsible use of cognitive-enhancing drugs by the healthy. Nature, 456, 702–705. doi:10.1038/456702a.
- Iuculano, T., & Cohen Kadosh, R. (2013). The mental cost of cognitive enhancement. Journal of Neuroscience, 33, 4482–4486. doi:10.1523/JNEUROSCI.4927-12.2013.
- Juengst, E. (1997). What does ‘enhancement’ mean? In E. Parens (Ed.), Enhancing human traits: Ethical and social implications (pp. 29–47). Washington, DC: Georgetown University Press.
- Knight, R. (2008). Consciousness unchained: Ethical issues and the vegetative and minimally conscious state. AJOB-Neuroscience, 8(9), 1–2. doi:10.1080/15265160802414524.
- Metzinger, T., & Hildt, E. (2011). Cognitive enhancement. In J. Illes & B. Sahakian (Eds.), Oxford handbook of neuroethics (pp. 245–264). Oxford: Oxford University Press.
- Roskies, A. (2002). Neuroethics for the new millennium. Neuron, 35, 21–23. doi:10.1016/S0896-6273(02)00763-8.
- Roskies, A. (2013). Brain imaging techniques. In A. Roskies & S. Morse (Eds.), A primer of criminal law and neuroscience (pp. 37–74). New York: Oxford University Press.
- Royal College of Physicians (UK). (2013). Prolonged disorders of consciousness: National clinical guideline. London, UK. https://www.rcplondon.ac.uk/prolonged_disorders_of_consciousness_national_clinical_guidelines
- Safire, W. (2002). Visions for a new field of neuroethics. In S. Marcus (Ed.), Neuroethics: Mapping the field (pp. 3–9). New York: Dana Press.
- Wilkinson, D., Kahane, G., Horne, M., & Savulescu, J. (2009). Functional neuroimaging and withdrawal of life-sustaining treatment from vegetative patients. Journal of Medical Ethics, 35, 508–511. doi:10.1136/jme.2008.029165.
- Farah, M. (Ed.). (2010). Neuroethics: An introduction with readings. Cambridge, MA: MIT Press.