The communication of information is central to many issues in global bioethics and so the justiﬁcation for censorship is a key concern. This research paper describes prudential and epistemic frameworks for the justiﬁcation of censorship and explores their utility in light of prominent controversies in global bioethics.
Censorship limits the freedom to communicate ideas, information, or opinions, typically on the grounds that, in so doing, some harm will be prevented. In this research paper, the justification for censorship is discussed with reference to four case studies of global significance in bioethics: the redaction of potentially harmful information in scientific publications; the questioning of health claims made by proponents of Complementary and Alternative Medicine (CAM); the veto of direct-to-consumer advertising of prescription medicines (DTCA); and the withholding of medical information by doctors from their patients under the doctrine of "therapeutic privilege."
In each instance, censorship has either been proposed or enacted somewhere on the globe and generated great controversy. Competing claims are evaluated with a theoretical framework that advances both prudential and epistemic benchmarks for limiting communication. It is concluded that wide debate and deliberation guided by both standards is key to achieving justiﬁed decisions about the utility and warrant for censorship.
Examples Of Censorship In Global Bioethics
In 2011, Dutch researchers developed a strain of the A/H5N1 “Avian Flu” virus that could undergo airborne transmission between ferrets (Herfst et al. 2012). Their ﬁnding was signiﬁcant in that airborne transmission of the modiﬁed virus was also likely to be possible between humans. The virus, which kills up to 60 % of people who contract it, had previously only infected people in contact with the saliva, blood, or faeces of infected birds, usually chickens. Until the Dutch research, it was thought that human-to-human transmission of the A/H5N1 virus posed little threat.
The researchers submitted their ﬁndings to the prestigious journal Science. But the journal’s editors were concerned that, should the methods for producing the virus fall into the hands of terrorists or other malevolent individuals, any beneﬁts of publication might be negated by the risks to public health. So the editors forwarded the manuscript to the US National Science Advisory Board for Biosecurity (NSABB). The NSABB promptly requested that the researchers redact the methods section from their paper.
In response, and amid global controversy, the researchers imposed a voluntary moratorium on their work so the issue could be debated. In June 2012, more than a year later, and after a debate that generated intense media interest, the researchers lifted their moratorium and the paper was published in its entirety.
Dubious Health Claims
There was little dispute that the A/H5N1 research was sound. Censorship has, however, been central to recent debates about health information with doubtful scientific grounds. In Australia, in November 2013, the New South Wales Parliament launched an inquiry into "The Promotion of False or Misleading Health-Related Information or Practices." The inquiry, ongoing at the time of writing, is focused on the regulatory power of the New South Wales Health Care Complaints Commission (HCCC) over health claims made by people and organizations not recognized as health service providers.
The inquiry follows a public warning issued by the HCCC about claims made by the Australian Vaccination-Skeptics Network. The claims questioned the safety and effectiveness of a range of vaccinations, including Diphtheria-Tetanus-Pertussis, offered to children under the National Immunization Program Schedule. The warning advises that the claims are "misleading, misrepresented and… likely to detrimentally affect the clinical management or care of its readers" (Health Care Complaints Commission 2014).
The Chairperson has been careful to state that the inquiry would not focus on the kinds of alternative health remedies that are used by many Australians. But a prominent lobby group, Friends of Science in Medicine, suggested in its submission that university courses in health disciplines with unproven scientific foundations, such as CAM, should be subject to censorship. The group singled out chiropractic, Reiki, and homeopathy for scrutiny on the basis that health-related university courses should be grounded in solid science.
Direct To Consumer Advertising Of Prescription Medicines (DTCA)
DTCA is another practice subject to censorship due to concerns about information veracity. DTCA is permitted in the US and New Zealand but outlawed in all other countries. Regulation reflects concern that advertising increases inappropriate drug use with its attendant risks of toxicity and adverse reactions. The US has developed a complex framework to address these concerns that involves statutory regulation and oversight by the Food and Drug Administration (FDA). Through its Office of Prescription Drug Promotion (OPDP), the FDA also conducts its own research into how drug advertising can mislead.
Vetoes on DTCA have been challenged in several jurisdictions on the basis that they impermissibly suppress free speech. In 2005, advertising agency CanWest mounted a legal challenge to Canada's ban on DTCA, a process that ultimately foundered for lack of funds before any judicial finding. A number of unsuccessful attempts have also been made to reverse the European Union's ban on pharmaceutical advertising.
Therapeutic Privilege
Censorship is also at issue in the more personal realm of the doctor–patient relationship. It has been a traditional "therapeutic privilege" of doctors to withhold information should they deem it harmful to patients. A common practice was to omit reference to certain risks of treatment. One surgeon defended his failure to disclose the risk of vocal cord paralysis to a patient considering thyroid surgery, saying, "were I to point out all the complications – or even half the complications – many people would refuse to have anything done, and therefore would be much worse off" (Buchanan 1978). The rise of respect for patient autonomy, and the view that competent patients make a valuable contribution to any assessment of their best interests, has seen this practice criticized in both the medical ethics literature and the courts.
Each of the cited examples highlights a tension between the value of free and open communication of information and the potential warrant for its suppression. To adjudicate the relevant competing arguments it is ﬁrst necessary to provide some theoretical scaffolding.
Prudential And Epistemic Warrant For Censorship
A Prudential Standard For Censorship
In the United States, freedom of speech is protected under the First Amendment to the Constitution which states that:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.
It is possible to advance two broad and related principles upon which an interest in free speech might be legitimately overridden. The first justification is prudential. Speakers might permissibly be silenced if their message is likely to cause direct and obvious harm. This justification appeals to the Harm Principle of the nineteenth-century philosopher and economist John Stuart Mill. In On Liberty Mill said that:
[T]he only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufﬁcient warrant (Mill 2012, p. 11).
According to the Harm Principle, it would be wrong to restrain a person from broadcasting or receiving a communication "because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise, or even right" (Mill 2012, p. 11). To adopt this motivation would be to decide what is good for another, an exercise that Mill argued was bound to fail. For Mill, individuals are best placed to arbitrate on what is good for them, and any third party charged with the same task is likely to get it wrong more often than not. As Mill put it, "The only freedom which deserves the name, is that of pursuing our own good in our own way" (Mill 2012, p. 14). On Mill's utilitarian calculus, proof of the moral wrong of third-party intervention is that overall utility suffers.
According to the Harm Principle, individual liberty is justiﬁably limited only to prevent harm to others. In his decision in Schenck v. United States, American jurist Oliver Wendell Holmes Jr. offered an example of censorship consistent with Mill’s rationale:
The most stringent protection of free speech would not protect a man in falsely shouting ﬁre in a theatre and causing a panic…. The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent.
A contemporary example of Mill's philosophy in practice is the outlawing of Holocaust denial in a number of European countries on the grounds that it may incite hatred and violence towards Jews.
This prudential standard for censorship must grapple with pluralism about how the good is defined. Subjective accounts of the good hold, consistent with Mill, that it is best achieved through fulfillment of an individual's informed and rational preferences. But if "harm to others" is defined on this subjective standard, speech may be limited for what some consider only minor or idiosyncratic offence. It has been argued, for example, that "hate speech" legislation crosses this boundary when it outlaws insults directed at another's race or religion.
Objective accounts hold that the good can be determined without reference to the particular values or preferences of individuals. Health and knowledge, for example, are proposed as objective goods. But censorship based on an objective standard of “harm to others” invites claims of unjustiﬁed paternalism; limiting people’s freedom for their own good despite their having the capacity to decide what is in their own interests. For example, some object to censorship of violent or sexual content in ﬁlms on the grounds that others are simply not qualiﬁed to judge what is good for them.
An Epistemic Standard For Censorship
The second justification for overriding an interest in free speech is epistemic. On this view, a speech act might be legitimately silenced if it caused the listener to hold a belief with questionable epistemic grounds, for example, one that was unjustified, false, or both. Alvin Goldman has argued that people have "veritistic" interests, that is, interests in believing truthful information. Because false or misleading speech works against the veritistic interests of the audience, their liberty to receive it might legitimately be constrained. Goldman calls this "epistemic paternalism":
I shall think of communication controllers as exercising epistemic paternalism whenever they interpose their own judgment rather than allow the audience to exercise theirs (all with an eye to the audience’s epistemic prospects) (Goldman 1991).
Goldman argues that epistemic paternalism already operates in schools that limit the teaching of creationism, in courts when character evidence is ruled inadmissible on the basis that it might bias juries, and in commerce when statutory provisions prohibit false or misleading information in the advertising of goods and services. But Thomas Scanlon points out that charging governments with vetting information for veracity raises at least two problems. First, there is a potential slippery slope to censorship for political rather than epistemic purposes. Second, there is the difﬁculty of ensuring that the people tasked with vetting information are competent to do so (Scanlon 1972).
Epistemic paternalism also conﬂicts with the school of thought that only through the fullest airing of views can listeners grasp the truth. Mill, for example, argued that people are worse off for silencing opinion because, “If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a beneﬁt, the clearer perception and livelier impression of truth, produced by its collision with error” (Mill 2012, p. 18). This “marketplace of ideas” theory was also championed by Oliver Wendell Holmes Jr. when he stated in Abrams v. United States that, “The ultimate good desired is better reached by free trade in ideas – that the best test of truth is the power of the thought to get itself accepted in the competition of the market.”
While both the prudential and epistemic benchmarks face objections, each is central to a discussion of the permissibility of censorship in global bioethics.
A Censorship Framework For Global Bioethics
Truth was not at issue in the A/H5N1 paper submitted to Science because peer review established the methods were rigorous. Justiﬁcation for censorship turned on the potential harms of publication and could, therefore, appeal to Mill’s Harm Principle. At ﬁrst glance, the possibility that terrorists could use the information to modify then spread the virus to kill people supports censorship. But opposing arguments suggest that, on balance, the harms of censorship could be even greater.
For example, redacting the methods could prevent other scientists from making vaccines and treatments. As a result, any subsequent pandemic could be even more lethal than a terrorist attack. There is also a view that, irrespective of a censor's best intentions, nature is far more likely to wreak havoc than a terrorist. Virus mutations in the natural environment can be as lethal as, or more lethal than, anything concocted in a laboratory. As virologist Dr Robert Webster put it, "Nature is the greatest bioterrorist…. The greater risk is what Mother Nature is doing out there every day" (Yang 2013). Moreover, the modified A/H5N1 virus had a lower mortality than the original version, plausibly lessening its overall threat.
It has also been suggested that most competent virologists could piece together the missing methods by “reverse engineering” the results data. Given that an aspiring terrorist would likely have signiﬁcant expertise it is improbable that censoring only the methods section would limit access to the modiﬁed virus. And, concerns about civil liberties notwithstanding, greater surveillance measures in many nations arguably diminish the overall risk of terrorism anyway. A more general argument against science censorship is that the beneﬁts of research are often unclear at the outset. Publication can deliver unanticipated windfalls as the information is disseminated and used by other investigators.
But in support of censorship, it was mooted that scientists developing vaccines could receive the necessary data by private channels, obviating the need to bring methods into the public realm. As noted, however, arguments favoring censorship were ultimately rejected and the paper was published in its entirety.
Global debate on the A/H5N1 data publication focused heavily on harmful consequences, showing how ﬁrmly embedded the utilitarian calculus is in public morality. The adjudication of the nature, probability, and severity of those harms was, in addition, exceedingly complex. It was, however, only through debate, deliberation, and the wide public airing of ideas that a determination on the permissibility of censorship was reached.
Dubious Health Claims
The ease with which health information can be disseminated over the Internet and the rising popularity of CAM raise questions about the justiﬁcation for censoring health claims with questionable scientiﬁc grounds. Health makes a strong claim to appear on any list of so-called “objective” goods. John Rawls includes it on his list of “primary goods” which are “things which it is supposed a rational man wants whatever else he wants” (Rawls 1971, p. 79). It is uncontroversial, therefore, that should someone delay medical treatment or take an ineffective remedy based on misleading health information serious harms may ensue. And so a harm-based justiﬁcation could plausibly be invoked to censor the information.
There are also good epistemic grounds to limit this type of communication. A cornerstone of contemporary medical ethics and law is that people give autonomous consent to treatment. Autonomy, or self-determination, requires that patients be in possession of material facts. Material facts are those, according to the judgment in the prominent Australian legal case Rogers v Whitaker, “that a reasonable person in the patient’s position would be likely to attach signiﬁcance to.” Health facts are material to patients and so misleading health information works against autonomous consent.
Should a CAM practitioner make an unproven health claim, then, a case for censorship might be made through appeal to both prudential and epistemic benchmarks. But an important retort appeals to an "epistemic pluralism" that legitimizes a variety of accounts of knowledge. Some argue, for example, that establishment medicine itself has used, and still does use, treatments that remain unsupported by high-grade evidence. Indeed, the very term "evidence-based" medicine implies the existence of medicine that lacks evidence. By contrast, the idea of "evidence-based physics" seems tautologous given the central place of evidence in that discipline.
On this view, establishment medicine is given a questionable epistemic privilege over less mainstream disciplines. While the practices of CAM might seem scientifically doubtful now, perhaps, like many medical therapies, they will come to be supported by evidence over time. Perhaps, also, people ascribe worth to knowledge in virtue of their idiosyncratic values. If the philosophy and proposed mechanisms of Reiki, for example, accord with an individual's values, is that not reason to permit its free use? Many religions offer guidance to adherents without fear of censorship due to inadequate evidence. It would seem counterintuitive, for example, to prevent a Buddhist from accessing spiritual guidance on the grounds that there is not enough evidence for reincarnation.
Moreover, many therapies under the CAM umbrella will offer benefit via the powerful placebo response. This is especially true of drugs that act on the central nervous system, such as painkillers and antidepressants. Should CAM therapies generate a similar effect, this might also support their free promulgation. Perhaps, too, doubts about the claims made by CAM are best aired in the marketplace of ideas. On this theory, the community is more likely to reach truthful conclusions about the effects of CAM through free and open transmission of information, not censorship.
There are a number of responses to these claims. First, rigorous evidence-testing frameworks like the Cochrane Collaboration have led establishment medicine to jettison many practices that did not withstand scrutiny. For example, a number of reviews failed to ﬁnd beneﬁt from steam inhalation in croup, a common upper airway infection in children. Despite steam being integral to the management of croup since the nineteenth century, many centers no longer recommend it.
Establishment medicine adopts a hierarchy of evidence from meta-analyses of controlled trials down to consensus expert agreement. Given the willingness of orthodox medicine to reject treatments that do not pass evidential muster, ought not CAM to do the same? Indeed, there are major efforts within CAM to develop a similar evidence base. So why not require CAM practitioners to comply with the same evidential standards as establishment medicine, under the auspices of regulatory bodies like the FDA?
Second, if CAM treatments work primarily via the placebo response, there is a strong case that such information is material for people considering them. Respect for autonomy suggests that reliance on placebo responses, and the absence of data for any primary physiological mechanism, ought to be made clear. Some might protest that such transparency would negate the placebo response, but placebo responses can still occur even when patients are told they are receiving a placebo (Kaptchuk et al. 2010). Moreover, information about the placebo component of responses to active drug therapies is widely available, yet the response still occurs.
A third concern is that, while the "marketplace of ideas" theory has entered the vernacular, it is not at all clear the concept withstands scrutiny. Alvin Goldman and economist James Cox have argued the theory lacks force on a number of grounds (Goldman and Cox 1996). They point out that no detailed argument has been adduced as to how free market mechanisms would actually advance truth. They observe that it is often the most influential opinions that gain acceptance in the marketplace, yet it is not obvious that such influence attaches to truthful claims. Moreover, they argue, the commercial market is not really "free" but rests, in fact, on a substantial infrastructure of regulation, for example, tort law and property rights. Must one have a specific degree of freedom in the ideas market to promote truth and, if so, how much?
They also argue against the notion that truth emerges from markets analogous to the way adaptive tendencies prevail in Darwinian evolution by “survival of the ﬁttest.” Market competition rewards efﬁcient production and so, if truth happens not to be efﬁcient it may not emerge. And given the importance of consumer preferences the authors opine that “If consumers do not value truth very much… perfect competition will efﬁciently ensure that they don’t get very much truth as compared with other goods” (Goldman and Cox 1996, p. 18).
Indeed, other psychological theories support the idea that some propositions gain acceptance not because they are true, but because they are entertained ﬁrst. Harvard University psychologist Daniel Gilbert has argued that propositions undergo a default acceptance and are only rejected after counter arguments are developed and considered by the actor (Gilbert 1991). Rejection, however, requires a motivation or ability that is frequently lacking. The path of least cognitive resistance is often acceptance of the proposition.
A variety of cognitive biases are co-conspirators in this process. "Mere exposure" suggests that things encountered more frequently engender a positive attitude in the viewer. So, arguments may become more palatable simply in virtue of being aired repeatedly. The "availability heuristic" refers to our tendency to estimate the frequency of an event by how easily we can access instances of similar events in memory. Availability is biased when we easily access instances that are especially graphic or salient. Sensational press reports of solitary claims that, for example, vaccinations cause autism make it easier to recall those claims. People may, as a result, believe the risk to be more widely accepted than it really is.
Epistemic pluralism must also contend with issues at the intersection of the private and public realms. Universities are frequently subsidized to some extent by public monies, and so their course offerings are not merely services offered by a private business. Taxpayers expect due diligence and oversight of government expenditure and so there is plausibly a lower threshold for censoring publicly funded health courses with a dubious evidential base.
Debate over potential censorship of questionable health claims is ongoing but both prudential and epistemic considerations will play a key role in its outcome.
Direct To Consumer Advertising Of Prescription Medicines (DTCA)
Use of prescription drugs requires a doctor’s authorization in the form of a written “prescription.” Prescription drugs attain that status because they are potentially habit forming, are associated with signiﬁcant side effects or toxicity, or require monitoring by a health professional to ensure safety. DTCA bans reﬂect the view that, because advertisers are motivated to persuade rather than inform, ads will be biased, misleading, imbalanced, or false. The fear is that consumers may come to want inappropriate drugs or have unrealistic expectations of beneﬁt and safety, undermining their autonomy and exposing them to risk.
Does DTCA actually harm consumers? While there are limited data there are reasons for concern. First, it is clear that DTCA generates a signiﬁcant return on investment for drug companies by increasing the use of advertised drugs. Yet, an FDA survey of doctors found 75 % thought DTCA caused patients to think “drugs work better than they actually do” (Aikin et al. 2004, p. 69) and 69 % said DTCA caused patients to want advertised drugs over any others, however effective. Moreover, annual adverse drug events reported to the FDA more than doubled, to 482,000, in the decade after DTCA was legalized in 1997 (Food and Drug Administration 2007). These ﬁgures suggest signiﬁcant adverse outcomes ﬂowing from DTCA.
DTCA proponents mount a number of responses to these concerns (Ventola 2011). They claim that by creating awareness of diseases and their treatment DTCA encourages more people to visit the doctor. Earlier diagnosis and treatment lead to better health outcomes and lowered healthcare costs. DTCA has been argued to increase compliance with medications through a “reminder” effect. It is also argued to reduce stigma associated with some disorders, for example, mental illness as well as driving down drug costs and thus widening access. Despite these claims, however, a recent meta-analysis found no evidence that DTCA led to improved public health (Gilbody et al. 2005).
Adjudicating the competing claims is an invidious task. But given that only the US and New Zealand permit DTCA, there is reason to think many countries believe the harms outweigh the benefits and censor DTCA on prudential grounds.
Might there also be epistemic warrant to censor DTCA? It is central to this question to determine if DTCA misleads. In the US, advertising of prescription drugs is subject to a raft of statutes and regulations that caution against false or misleading claims and mandate balanced disclosure of the risks and beneﬁts of the advertised drug. These strictures include the Food, Drug, and Cosmetic Act and the US Code of Federal Regulations. The FDA also has an entire unit, the Ofﬁce of Prescription Drug Promotion (OPDP), dedicated to ensuring that DTCA complies with these requirements. The OPDP also conducts its own innovative research into some of the more subtle effects of advertising.
Yet, as the FDA study cited earlier found (Aikin et al. 2004), many DTCA viewers subsequently approach their doctors with unrealistic expectations of an advertised drug’s effects. Consumer psychology research sheds some light on why FDA regulation may be inadequate to protect DTCA viewers. This research suggests advertising persuades by means other than the explicit drug claims that are subject to FDA scrutiny.
Music and imagery may generate positive attitudes and beliefs independent of a commercial's factual content (Biegler and Vargas 2013). Music and imagery have been termed the "non-propositional" content of ads, because they can persuade without making specific truth-assessable claims. Some argue that non-propositional content ought to attract the same regulatory scrutiny as explicit claims about drugs (Biegler and Vargas 2013). The capacity of non-propositional content in DTCA to influence viewer beliefs would seem to make it a legitimate target for the epistemically minded censor.
But censorship of non-propositional content in DTCA raises a problem of consistency. Would this mandate censoring a range of other advertising that potentially misleads through non-propositional content? One way to resolve the difﬁculty is to focus on the importance of material facts. Failure to accurately grasp material information is detrimental to autonomous choices. While drug information is likely to be material to most, so too are facts about, for example, health insurance and ﬁnancial advice. So a case might be made that non-propositional content be vetted in advertisements for such products, while leaving products with lesser capacity for harm, dishwashing detergent for example, free from censorship.
Whether a prudential standard justiﬁes bans on DTCA is still widely debated. Research on subtle persuasion in advertising is, however, generating ﬁndings that inform the epistemic benchmark that ought to govern DTCA permissibility.
Therapeutic Privilege
Medical practice has changed dramatically in the last few decades. Traditionally, many doctors adopted a stance of "beneficent authoritarianism" (Pellegrino and Thomasma 1988, p. 5). On this principle, doctors knew what was best for patients and could withhold information about treatments they did not favor or about adverse effects that might cause anxiety. This practice is now widely seen as unjustified paternalism. It is, however, worth exploring if there is any justification for doctors to engage in self-censorship.
Contemporary thinking is that people with requisite competence are well placed to understand information about their medical condition and its treatment. Disclosure of material information about proposed treatments and their alternatives enables autonomous medical choices. The link between autonomy and the promotion of individual interests supports the claim that disclosure is essential if patients are to make choices that are in their best interests. Given the doctor's duty of care to act in the patient's best interests, there is further compelling reason to fully discuss treatment options with patients.
In the thyroid surgery example introduced earlier, the doctor might appeal to Mill's Harm Principle to justify withholding information about the complication of vocal cord paralysis. The doctor might claim that disclosing side effects could make the patient anxious and lead them to refuse a beneficial treatment. The claim is not implausible. Imagine a related example of a person considering open heart surgery. Wishing to be fully informed, the person watches a video of the surgery, becomes very anxious, and refuses the operation. This seems like an unrealistic emotional response that could harm through the omission of necessary treatment. Reason, perhaps, to censor the video.
It might be objected that such reactions are unlikely. Moreover, anxiety is commonplace in the consulting room and is arguably better managed by exploring the true nature of the fear rather than censoring the worrying information. On this objection, self-censorship remains an example of unjustiﬁed paternalism. But, while doctors may lack justiﬁcation for self-censorship on prudential grounds there does seem scope for self-censorship on epistemic grounds.
Doctors typically speak to patients in lay terms so that medical information can be understood more easily. A urinary tract infection might be caused by the bacterium Escherichia coli, but the doctor will often use the simpler term "bug." In the same vein, "melaena" becomes "blood in the stool," "pre-syncope" translates to "dizziness," and "microcytic anaemia" is "low blood count." This self-censorship is arguably permissible under epistemic paternalism if it promotes autonomy through a clearer grasp of material facts. But it might be countered that, should patients be supplied with information in its unadulterated form, they could better seek out alternative opinions from health professionals or through their own research. Perhaps the best option, then, is to also offer patients copies of investigation reports and medical summaries.
Doctors ought, however, to be aware of more subtle ways in which their methods of disclosure may influence patients. How doctors frame information can have marked effects on its interpretation. For example, patients favor a surgery far more when it is said to have a 90 % survival rate rather than a 10 % mortality rate, despite the two statistics being identical. It is conceivable, too, that doctors might choose the frame that favors their preferred course of treatment. To obviate this effect, presenting information in both frames may be best.
Contemporary reasoning finds little support for arguments that doctors can permissibly self-censor on prudential grounds under the principle of therapeutic privilege. Self-censorship on epistemic grounds seems more defensible if it enhances patients' understanding. However, given the potential utility of precise medical information, there is also a case to provide patients with unabridged medical summaries.
The high value attached to free speech makes it the default assumption in cases of potential censorship. The burden of proof for a warrant to censor information resides with the aspiring censor. There are two plausible benchmarks for justifying censorship. A prudential standard holds that censorship of information is justified should significant harms be prevented as a result. The standard is, however, vulnerable to questions about the nature and degree of the harms that might warrant censorship.
An epistemic standard holds that communication may be limited should greater truth possession result. The epistemic standard must be defended against questions about the competence of those vetting information and their potentially manipulative reasons for doing so. It must also contend with an epistemic pluralism that accords legitimacy to a variety of accounts of knowledge.
Whether censorship ought to prevail in the cited controversies in global bioethics is a question of the degree to which it both prevents real harm and enhances actual understanding. Ultimately, that question is best answered through wide community debate and deliberation grounded in a full airing of the competing arguments.
- Aikin, K. J., Swasy, J. L., & Braman, A. C. (2004). Patient and physician attitudes and behaviors associated with DTC promotion of prescription drugs – summary of FDA survey research results. Food and Drug Administration Center for Drug Evaluation and Research. http://www.fda.gov/downloads/Drugs/ScienceResearch/ResearchAreas/DrugMarketingAdvertisingandCommunicationsResearch/UCM152860.pdf
- Biegler, P., & Vargas, P. (2013). Ban the sunset? Nonpropositional content and regulation of pharmaceutical advertising. American Journal of Bioethics, 13(5), 3–13.
- Buchanan, A. E. (1978). Medical paternalism. Philosophy and Public Affairs, 7(4), 370–390.
- Food and Drug Administration. (2007). Improving public health through human drugs: Center for Drug Evaluation and Research. http://www.fda.gov/downloads/AboutFDA/CentersOffices/CDER/WhatWeDo/UCM121704.pdf. Accessed 19 Jan 2015.
- Gilbert, D. T. (1991). How mental systems believe. American Psychologist, 46(2), 107–119.
- Gilbody, S., Wilson, P., & Watt, I. (2005). Benefits and harms of direct to consumer advertising: A systematic review. Quality and Safety in Health Care, 14(4), 246–250.
- Goldman, A. I. (1991). Epistemic paternalism: Communication control in law and society. The Journal of Philosophy, 88(3), 113–131.
- Goldman, A. I., & Cox, J. C. (1996). Speech, truth, and the free market for ideas. Legal Theory, 2, 1–32.
- Greene, J. A., & Herzberg, D. (2010). Hidden in plain sight: Marketing prescription drugs to consumers in the twentieth century. American Journal of Public Health, 100(5), 793–803.
- Health Care Complaints Commission. (2014). Public statement – Warning about the Australian Vaccination-skeptics Network, Inc. http://www.hccc.nsw.gov.au/Hearings—decisions/Public-statements-and-warnings/Public-statement—warning-about-the-Australian-Vaccination-skeptics-Network–Inc—AVN——formerly-known-as-Australian-Vaccination-Network-Inc-. Accessed 30 Oct 2014.
- Herfst, S., Schrauwen, E. J., Linster, M., Chutinimitkul, S., de Wit, E., Munster, V. J., ... Fouchier, R. A. (2012). Airborne transmission of influenza A/H5N1 virus between ferrets. Science, 336(6088), 1534–1541.
- Kaptchuk, T. J., Friedlander, E., Kelley, J. M., Sanchez, M. N., Kokkotou, E., Singer, J. P., ... Lembo, A. J. (2010). Placebos without deception: A randomized controlled trial in irritable bowel syndrome. PLoS One, 5(12), e15591.
- Mill, J. S. (2012). On Liberty. Simon and Brown. www.simonandbrown.com
- Pellegrino, E. D., & Thomasma, D. C. (1988). For the patient's good: The restoration of beneficence in health care. New York: Oxford University Press.
- Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.
- Scanlon, T. (1972). A theory of freedom of expression. Philosophy and Public Affairs, 1(2), 204–226.
- Selgelid, M. (2007). A tale of two studies: Ethics, bioterrorism, and the censorship of science. Hastings Center Report, 37(3), 35–43.
- Ventola, C. L. (2011). Direct-to-consumer pharmaceutical advertising: Therapeutic or toxic? Pharmacy & Therapeutics, 36(10), 669–684.
- Yang, J. (2013). Mutant virus sparks bioethics debate. Toronto Star. http://www.thestar.com/news/world/2013/02/10/mutant_virus_sparks_bioethics_debate.html. Accessed 30 Oct 2014.