Scientific Misconduct Research Paper

Abstract

Scientific (or research) misconduct has become a global concern. This entry reviews famous cases of misconduct or alleged misconduct; definitions of misconduct; policies for reporting, investigating, and adjudicating misconduct; incidence and causes of misconduct; and strategies for preventing misconduct.

Introduction

In the last 30 years, scientific research has become increasingly global in scope. Following World War II, the USA, the Soviet Union, the UK, France, and other European nations were the leading sponsors of scientific research. Today, the list of top research funders includes China, Japan, South Korea, Taiwan, India, Singapore, Australia, Turkey, Brazil, Iran, and other nations outside the USA and Europe. Many research projects involve collaborations among scientists from different countries, and scientific journals publish articles from all over the world.

Globalization affects all aspects of scientific research, including ethical conduct. In the 1980s and 1990s, ethical scandals in federally funded research in the USA made scientists and government officials aware of the need to address research misconduct. In the twenty-first century, scandals have arisen in many different countries outside the USA, including Canada, China, Denmark, Germany, Italy, Japan, the Netherlands, South Korea, and the UK. Scientific (or research) misconduct can compromise the integrity of the research record, erode the public’s trust in science, and threaten public health and safety. It is important, therefore, for countries around the world to address research misconduct (European Science Foundation 2008; Ana et al. 2013; Resnik and Master 2013; Shamoo and Resnik 2014).

Some Well-Known Cases Involving Misconduct Or Allegations Of Misconduct

Research misconduct is by no means a contemporary problem. In 1830, British mathematician, inventor, and philosopher Charles Babbage published a book titled Reflections on the Decline of Science in England in which he rebuked his peers for dishonest research conduct. Babbage distinguished between forging (making up data), trimming (removing data that are inconsistent with one’s hypothesis), and cooking (designing an experiment such that it is not a genuine test of a hypothesis) (Shamoo and Resnik 2014).

One of the earliest misconduct cases involved museum curator Charles Dawson’s discovery of skull bones in the gravel beds of Piltdown, East Sussex, UK, in 1912. Dawson claimed that the bones were fossils that provided evidence of a “missing link” between humans and apes. The discovery was controversial from the outset, and many scientists doubted that the fossil was genuine. The Piltdown man was proven to be a hoax in 1953, when recently developed chemical techniques showed that the skull was a human cranium combined with the lower jawbone of an orangutan. The skull had been aged artificially to make it appear older than it was (Shamoo and Resnik 2014).

In 1974, William Summerlin was conducting skin transplantation experiments at the Sloan Kettering Institute in New York in Robert Good’s immunology laboratory. Summerlin’s research involved transplanting patches of skin from black-haired mice onto white-haired mice. He claimed that culturing the tissue prior to transplantation lowered the risk of rejection. A technician who was cleaning the animals’ cages discovered that the black-colored patches of hair on Summerlin’s white mice could be washed away with alcohol. The technician reported the finding to Good, who initiated an investigation. Summerlin admitted to fabricating data by using a felt-tip pen to draw patches of black hair on the white mice. The committee investigating the incident found that Summerlin had fabricated data in other experiments and required him to take a medical leave of absence. The scandal damaged Good’s reputation, even though he was not implicated in it (Shamoo and Resnik 2014).

In 1983, two science journalists, William Broad and Nicholas Wade, published a book, Betrayers of the Truth, which raised awareness about fraud and deception in science. The book discussed the Piltdown and Summerlin cases and questioned the integrity of scientific icons, such as Galileo Galilei, Isaac Newton, and Gregor Mendel (Broad and Wade 1983). Broad and Wade argued that Robert Millikan, who won the Nobel Prize in Physics in 1923 for measuring the charge of the electron, had acted dishonestly. To measure the charge, Millikan observed electrically charged oil droplets falling between two metal plates held at different voltages. When the field was adjusted so that a droplet hung suspended in the air, the upward electrical force on it equaled the downward force of gravity, and Millikan could determine the droplet’s charge by equating the two forces. Historians who examined Millikan’s laboratory notebook for these experiments found that he did not report 49 out of 189 observations (26 %) that were marked as “fair” or “poor” in his notebook, even though he stated in the paper that he had reported all of his observations. Although Millikan’s results have been validated many times by other scientists, Broad and Wade argued that his conduct was deceptive. However, others have argued that Millikan did not falsify data: he had a good understanding of his equipment and knew when it was working properly, and he probably decided not to report observations resulting from experimental error. While he should have discussed issues pertaining to experimental error in his paper, he did not conduct fraudulent research (Shamoo and Resnik 2014).
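
The measurement logic can be made concrete with the force balance for a suspended droplet. This is the idealized textbook form of the condition described above, not a reconstruction of Millikan’s full analysis (which also used falling droplets and corrections for air resistance); the symbols below are generic, not his notation:

```latex
% Suspended droplet: the upward electric force balances gravity.
% q = droplet charge, V = voltage across the plates, d = plate
% separation, m = droplet mass, g = gravitational acceleration.
\[
  qE = mg, \qquad E = \frac{V}{d}
  \quad\Longrightarrow\quad
  q = \frac{mgd}{V}
\]
```

Measured over many droplets, the inferred charges cluster at integer multiples of a smallest value, which Millikan identified as the elementary charge e.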

One of the first well-known cases involving an international collaboration took place in the mid-1980s. The case involved Robert Gallo, from the National Cancer Institute (NCI) of the US National Institutes of Health (NIH), and Luc Montagnier from the Pasteur Institute in France.

The two investigators were working together on isolating a virus thought to cause acquired immunodeficiency syndrome (AIDS). Gallo and Montagnier exchanged cell lines they had been culturing in their laboratories, which they believed were infected with different strains of the virus, and they published papers on the human immunodeficiency virus (HIV) in the same issue of the journal Science. When genetic tests revealed that the strains from the different laboratories were nearly identical, Montagnier accused Gallo of stealing his strain and passing it off as his own. An investigation of Gallo found that he did not commit misconduct. The most likely cause of the genetic similarity between the strains is that both cell lines had been infected by a third, vigorous strain in Montagnier’s laboratory. The US and French governments reached an agreement that named Gallo and Montagnier as codiscoverers of HIV and required the sharing of patent rights for HIV blood tests (Shamoo and Resnik 2014).

A case that had a significant influence on the development of US federal government policies took place in the 1980s at the Whitehead Institute, a research center affiliated with the Massachusetts Institute of Technology (MIT). Nobel Prize-winning molecular biologist David Baltimore and five coauthors published a paper in the journal Cell in 1986 in which they claimed to show how gene transfer methods could be used to induce immune responses in cells. The NIH funded the research. Thereza Imanishi-Kari, an assistant professor, had conducted many of the key experiments reported in the paper. Imanishi-Kari’s postdoctoral fellow, Margot O’Toole, had trouble replicating the experiments, so she asked to see Imanishi-Kari’s laboratory notebooks. When O’Toole could not reconcile the data recorded in the notebooks with the data reported in the paper, she accused Imanishi-Kari of fabricating and falsifying data. Internal investigations by MIT and Tufts University found no evidence of misconduct, but an investigation by the Office of Research Integrity (ORI), which oversees NIH-funded research, found that misconduct had been committed. A Congressional committee that was looking into fraud in NIH-funded research also investigated the case, which was reported on the front page of the New York Times. In 1996, a federal appeals panel found that there was not sufficient evidence to prove that Imanishi-Kari had committed misconduct, and it overturned the ORI’s finding. Imanishi-Kari admitted to poor record-keeping practices, but not to misconduct. Although Baltimore was never implicated in the alleged misconduct, the adverse publicity damaged his reputation. Testifying before a Congressional committee, Baltimore described the affair as a witch-hunt (Shamoo and Resnik 2014).

Another case involving an international collaboration came to light in 1993, when Roger Poisson, a professor of surgery at the University of Montreal, admitted to fabricating and falsifying data for 99 patients enrolled in the NIH-funded National Surgical Adjuvant Breast and Bowel Project (NSABP), a large multicenter NCI study led by Bernard Fisher of the University of Pittsburgh. Poisson admitted to changing his patients’ medical data so that they would qualify for the study and receive experimental treatment. The misconduct was discovered when NSABP statisticians noticed inconsistencies in Poisson’s data. NSABP scientists reanalyzed the study results after removing Poisson’s data and found that his misconduct had no effect on the overall findings (Shamoo and Resnik 2014). The University of Pittsburgh and the NCI accused Fisher of knowingly publishing fake data, but the ORI found no evidence that Fisher had committed misconduct. Fisher’s reputation was damaged as a result of these investigations and public disclosures, and he sued the NIH, the University of Pittsburgh, and the ORI for defamation. The lawsuit was settled out of court in 1997 for $3 million. The case spurred efforts by the Canadian government to develop research ethics policies (Shamoo and Resnik 2014).

Another case with an international dimension occurred in 2002, when an investigatory committee at Bell Laboratories found that Jan Hendrik Schön, a rising star in the fields of condensed matter physics and nanotechnology, had faked data in at least 17 publications. Schön had been publishing in top scientific journals, such as Science, Nature, and Physical Review Letters, at the unbelievable rate of one paper every 8 days. Dozens of his papers were retracted. Schön had come to Bell Laboratories from the University of Konstanz in Germany, and in 2004 the university withdrew his Ph.D. after a committee found that data reported in his dissertation were also fraudulent (Shamoo and Resnik 2014).

In 2003, researchers accused Bjørn Lomborg, an adjunct professor at the Copenhagen Business School in Denmark, of scientific dishonesty related to the publication of his 2001 book The Skeptical Environmentalist. Lomborg’s book challenged the consensus view among scientists that human-caused climate change will have dire consequences for the environment, the economy, and society. The researchers argued that Lomborg had fabricated, misrepresented, and misinterpreted data in the book. The Danish Committee on Scientific Dishonesty ruled that Lomborg had committed scientific dishonesty, but the Ministry of Science, Technology, and Innovation overturned the ruling on the grounds that there was not sufficient evidence to support it and that the definition of dishonesty was too vague. In response to the Lomborg affair, Denmark developed new regulations that limit the scope of scientific dishonesty to fabrication, falsification, and plagiarism, or other serious deviations from good research practice (Resnik and Master 2013).

A case that had reverberations across the globe took place in 2005. In 2004 and 2005, a research group led by Woo Suk Hwang, a professor at Seoul National University in South Korea, published two papers in the journal Science reporting the derivation of human embryonic stem (HES) cell lines by therapeutic cloning. If confirmed, the findings would have been a major breakthrough in stem cell science. In December 2005, the editors of Science received a tip from an anonymous informant that some of the images of the cell lines reported in the 2005 paper had been faked. Shortly thereafter, Sung Roh, a member of Hwang’s team, told reporters that 9 out of 11 cell lines reported in the 2005 paper were fabricated. A committee at Seoul National University began investigating Hwang’s research and found that all of the data in both papers had been faked. Hwang’s papers were withdrawn, and he resigned his position at the university. In 2009, Hwang was convicted of embezzlement and breach of bioethics laws, but his sentence was suspended. A committee from the University of Pittsburgh found that Gerald Schatten, a faculty member who had collaborated with Hwang, had no involvement in the data fabrication, but that he had neglected his responsibilities as an author by not carefully reviewing the data and manuscript (Shamoo and Resnik 2014).

Several factors made it difficult to investigate the Hwang case. First, Hwang was a national hero in South Korea and received tremendous support from the media, the government, and university officials. Many of Hwang’s peers did not want to criticize his research because he was bringing a great deal of money and prestige to Seoul National University. Second, South Korean universities did not have adequate policies or procedures for reporting or investigating misconduct. Scientists who suspected fraud feared retaliation if they made an allegation against Hwang, and suspicions were initially reported to the media, not to university officials. Third, deference to authority is part of South Korean culture, and many subordinates did not want to challenge Hwang. Fourth, South Korean universities did not have adequate programs in place to teach students about research ethics. After the Hwang case, South Korea initiated a number of reforms to promote research integrity (Kim and Park 2013).

Another case that emerged in 2005 involved Eric Poehlman, a professor at the University of Montreal who had previously held positions at the University of Vermont and the University of Maryland. An investigation by the University of Vermont and the ORI found that Poehlman had fabricated or falsified data in 15 federal grant applications worth $2.9 million and in 17 publications. The US Department of Justice also brought charges against Poehlman for defrauding the federal government. Poehlman agreed to a comprehensive legal settlement that addressed the criminal, civil, and administrative actions brought against him. Under the terms of the settlement, Poehlman agreed to serve 1 year and 1 day in federal prison, to be barred for life from receiving federal grants, and to pay $180,000 to the government in restitution. He also agreed to pay $16,000 to the lawyer of Walter DeNino, the student who had accused him of misconduct after becoming suspicious of changes that Poehlman had made to a data spreadsheet. Poehlman’s papers were also retracted (Shamoo and Resnik 2014).

A case that had adverse impacts on the health of children unfolded over the span of more than a decade. In 1998, British surgeon Andrew Wakefield published a paper in the journal Lancet claiming that exposure to the measles, mumps, and rubella (MMR) vaccine had caused 12 healthy children to develop autism and intestinal problems. Members of the anti-vaccine movement hailed the paper as definitive proof that vaccines cause autism and other health problems. As a result, vaccination rates in the UK and other countries declined significantly. In 2004, journalist Brian Deer published an article in the Sunday Times in which he accused Wakefield of failing to disclose that his vaccine research was supported by a law firm preparing a lawsuit against MMR manufacturers and of not obtaining ethics board approval for his study. In response to these allegations, the UK General Medical Council (GMC) investigated Wakefield and decided to revoke his license for acting dishonestly by not disclosing significant financial interests and for ordering risky medical procedures, such as colonoscopies and lumbar punctures, without appropriate qualifications or ethics board approval. In 2010, the Lancet retracted the autism paper. In 2011, Deer published an article in the British Medical Journal claiming that Wakefield had falsified data in the paper. Deer reviewed the medical records of the children in the study and found that normal pathology results had been changed to colitis in nine cases, that three children reported as autistic did not have autism, and that five children reported as normal prior to exposure to the vaccine already had developmental problems. Wakefield, who has sued Deer and the British Medical Journal for libel, claims that he did not falsify any data. He continues to provide advice and support to anti-vaccine groups (Shamoo and Resnik 2014).

Defining Misconduct

The most important conceptual issue pertaining to research misconduct is how to define it. To help clarify this issue, it is useful to distinguish between misconduct as an ethical concept and misconduct as a legal concept. Research misconduct as an ethical concept is simply behavior that violates accepted ethical standards for research. Misconduct in this sense is wrongful or unethical behavior (Resnik 2003). While this definition may be useful for teaching students about research ethics and developing codes of conduct, it is nearly impossible to enforce because it is excessively broad and vague. Many organizations have therefore decided to legally enforce some types of serious violations of research norms, which they classify as misconduct. Misconduct in this second sense is behavior that violates certain types of legal rules. These rules may include policies adopted by institutions, funding organizations, or journals, or various regulations or laws. For example, someone who violates US federal misconduct rules may lose federal funding. Someone who violates a university’s rules against misconduct may lose employment. Someone who commits misconduct may also be charged with fraud if their behavior meets the definition of this concept. Fraud is a legal concept that can be defined as causing harm by misrepresenting a matter of fact (Resnik 2003). Someone who commits fraud may be liable under criminal or civil law. Misconduct proceedings conducted by institutions usually fall under contract law, whereas proceedings conducted by federal agencies fall under administrative law.

It is also important to distinguish misconduct from questionable research practices (QRPs) and good research practices (GRPs). Scientific behavior ranges from ethical conduct (i.e., GRP) on the one hand to highly unethical conduct (i.e., misconduct) on the other. QRPs fall somewhere between these two poles: they are behaviors that are ethically suspect or controversial but not widely recognized as highly unethical. Some examples of QRPs include omitting data outliers from one’s analysis without providing an adequate explanation, overstating the significance of one’s results, merely acknowledging an individual in a manuscript even though they deserved authorship credit, failing to keep adequate research records, violating the confidentiality of peer review, and not disclosing a conflict of interest to a journal (Shamoo and Resnik 2014).

During the 1980s and 1990s, US federal agencies defined misconduct in research as fabrication of data, falsification of data, or plagiarism (FFP), or other serious deviations from accepted scientific practices. After several years of debate, the US government dropped the “other serious deviations” category on the grounds that it was too vague and difficult to enforce; moreover, some serious deviations, such as sexual harassment, theft, or violations of human or animal research regulations, may be covered by other policies (Resnik 2003). The definition currently used by federal agencies limits misconduct to FFP. Misconduct must be committed knowingly, intentionally, or recklessly, and it does not include honest error or scientific disagreement. Fabrication is making up data; falsification is changing or omitting data or misrepresenting research by manipulating materials or processes; and plagiarism is appropriating another person’s ideas, words, results, or processes without giving proper credit (Office of Science and Technology Policy 2000).

While the federal definition of misconduct has considerable influence, it is not universally accepted. Nearly 60 % of US universities have definitions of misconduct that go beyond FFP. Some of the other behaviors that universities classify as misconduct include serious deviations from accepted research practices, significant or material violations of regulations, misuse of confidential information, interference with a misconduct investigation, inappropriate authorship, and misappropriation of property (Resnik et al. 2014). Other countries have also adopted definitions of misconduct that include behaviors other than FFP (European Science Foundation 2008). For example, the UK Research Council’s (2012) definition of misconduct includes FFP, inappropriate authorship, and failure to exhibit due care for human or animal research subjects; Canada’s includes destruction of research records and mismanagement of conflicts of interest (Tri-Council Agency 2011); China’s includes submitting false résumés (Zeng and Resnik 2010); and, as noted earlier, Denmark’s includes other serious deviations from good research practice.

Disagreements about how to define research misconduct could pose ethical and legal problems for international research collaborations, since a type of behavior treated as misconduct in one nation might not be so treated in another. Collaborators from different countries might be unsure how to deal with behavior that is defined as misconduct in one place but not in another. One could try to deal with this issue by following local definitions (i.e., “when in Rome, do as the Romans do”), but situations might arise, such as unethical behavior in cyberspace or between nations, where the locality would be unclear. To avoid such confusion, the research community should seek to develop a universally recognized definition of misconduct. If this goal is not achievable, then scientists should try to develop a common core definition of misconduct (e.g., FFP) that would form the basis of other definitions. In recent years, scientists and government leaders from around the globe have held conferences to discuss research integrity issues. One result of these efforts was the development of the Singapore Statement on Research Integrity in 2010. The Singapore Statement includes some useful ethical principles, but it does not define misconduct (Singapore Statement 2010).

Reporting, Investigating, And Adjudicating Misconduct

It is important to have fair and effective procedures for reporting, investigating, and adjudicating research misconduct in order to promote scientific integrity and protect the rights of the parties involved. The US government has developed policies for recipients of federal funding to follow. These policies, which have served as a model for others adopted throughout the world (Resnik and Master 2013), generally involve four stages: informal assessment, inquiry, investigation, and adjudication. During the first stage, someone who suspects that misconduct has occurred makes a report in writing to an institutional official (such as a department chair), who relays the report to the person in charge of research integrity, ethics, or compliance at the institution. The research integrity official assesses the report to determine whether the allegation fits the definition of misconduct and has some evidential support. If it does, he or she appoints a committee to conduct an inquiry to determine whether there is enough evidence to warrant an investigation. The inquiry committee may sequester and examine research records and interview witnesses. If the committee determines that there is enough evidence to warrant an investigation, the research integrity official appoints an investigation committee. This committee may also examine records and interview witnesses, and it sends its findings to the research integrity official, who then decides how to adjudicate the matter. If the committee finds that there was no misconduct, the matter is over. If the committee makes a finding of misconduct, the research integrity official must decide what sort of punishment to administer (e.g., termination of employment, supervision of research, education/training). If the research has been funded by a federal agency, the official sends a report to the agency for review. The agency may accept the report, require additional evidence or deliberation, or even conduct its own investigation. If the agency finds that there was no misconduct, the matter is ended. If the agency finds that misconduct has occurred, it may impose sanctions, such as denial of federal funding for a period of time, and it will publish an official finding of misconduct that is made available to the public (Shamoo and Resnik 2014).
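
Because the process is essentially a gated pipeline, with each stage leading to the next only when its question is answered affirmatively, it can be summarized schematically. The sketch below is purely illustrative, not an official ORI or agency workflow; the stage names and the single proceed flag are simplifications of the procedure described above.

```python
from enum import Enum, auto

class Stage(Enum):
    ASSESSMENT = auto()
    INQUIRY = auto()
    INVESTIGATION = auto()
    ADJUDICATION = auto()
    CLOSED = auto()

ORDER = [Stage.ASSESSMENT, Stage.INQUIRY, Stage.INVESTIGATION, Stage.ADJUDICATION]

def next_stage(stage: Stage, proceed: bool) -> Stage:
    """Advance a proceeding one step.

    `proceed` stands for the gating question at each stage: does the
    allegation fit the definition and have evidential support
    (assessment)? Is there enough evidence to warrant an investigation
    (inquiry)? Was misconduct found (investigation)?
    """
    if stage == Stage.CLOSED:
        return Stage.CLOSED
    if not proceed or stage == Stage.ADJUDICATION:
        # A negative answer closes the matter; after adjudication the
        # official decides sanctions and reports to the funding agency.
        return Stage.CLOSED
    return ORDER[ORDER.index(stage) + 1]

# Example: an allegation that survives assessment and inquiry but is
# not substantiated by the investigation committee.
stage = Stage.ASSESSMENT
for answer in (True, True, False):
    stage = next_stage(stage, answer)
print(stage)  # Stage.CLOSED
```

Real proceedings are, of course, messier: appeals, agency review, and parallel legal actions can reopen a matter that this toy model treats as closed.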

There are several important points to note about misconduct proceedings. First, misconduct proceedings are supposed to be kept confidential to protect the rights and reputations of the accused and other parties. As seen in some of the cases discussed above, confidentiality is, unfortunately, not always maintained, and the reputations of innocent parties have been damaged. Second, defendants have rights to due process that must be respected. They have a right to seek legal counsel, to question witnesses, and to appeal decisions. Sometimes witnesses and those who report misconduct (i.e., whistleblowers) also hire attorneys. Third, whistleblowers should be protected against retaliation. Unfortunately, whistleblowers sometimes suffer adverse consequences from their actions. Those who do not experience direct retaliation may develop reputations as troublemakers and have difficulty finding work, or they may lose funding if their supervisor is found to have committed misconduct. To continue their scientific careers, whistleblowers may need to find new supervisors or transfer to different institutions. Fourth, to provide additional protection for whistleblowers, some institutions permit anonymous misconduct allegations. However, it may not be possible to maintain anonymity if the allegation leads to an inquiry or investigation, since the accuser may need to provide testimony. Fifth, misconduct proceedings can consume a great deal of time, money, and energy; as mentioned above, the Imanishi-Kari case lasted 10 years. Sixth, if researchers have published a paper that has been affected by misconduct, they should publish a retraction if the results are no longer valid or a correction if the misconduct resulted in a minor error. Publishing a retraction or correction helps to protect the integrity of the research record (Shamoo and Resnik 2014).

Incidence And Causes Of Misconduct

A number of studies have attempted to estimate the incidence of research misconduct. Estimates from surveys that ask respondents to report whether they have direct knowledge of misconduct vary from 3 % to 32 % (Shamoo and Resnik 2014). A problem with these types of studies is that they may overestimate the misconduct rate, because respondents may not have good evidence that misconduct has occurred and because two different respondents may report the same incident on a survey. Another way of estimating the rate of misconduct is to ask researchers to self-report. A survey of over 3,000 NIH-funded scientists published in the journal Nature found that 0.3 % admitted to falsifying or “cooking” research data in the last 3 years (Martinson et al. 2005). A systematic review and meta-analysis of survey research found that about 2 % of scientists admitted to fabricating or falsifying data at least once (Fanelli 2009). A problem with self-report surveys is that they may underestimate the misconduct rate, because respondents will not want to admit to engaging in unethical or illegal activity, even if their confidentiality is protected (Shamoo and Resnik 2014). While the incidence of misconduct is probably quite low, it is still a significant concern for researchers, because it can have wide-ranging adverse impacts on science and society.
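
The double-counting worry about direct-knowledge surveys is easy to see with a toy simulation. All of the numbers below (population size, true offender rate, witnesses per incident) are invented for illustration; the point is only that when each incident is known to several colleagues, the fraction of respondents reporting knowledge of misconduct can be several times the fraction of scientists who actually committed it.

```python
import random

random.seed(0)

N_SCIENTISTS = 100_000   # hypothetical survey population
TRUE_RATE = 0.01         # assumed fraction who actually commit misconduct
WITNESSES = 5            # assumed colleagues with direct knowledge per incident

n_offenders = int(N_SCIENTISTS * TRUE_RATE)

# Each incident becomes "direct knowledge" for several colleagues,
# so the same incident is counted once per witness.
aware = set()
for _ in range(n_offenders):
    aware.update(random.sample(range(N_SCIENTISTS), WITNESSES))

print(f"true offender rate:        {TRUE_RATE:.1%}")                     # 1.0%
print(f"'direct knowledge' rate:   {len(aware) / N_SCIENTISTS:.1%}")     # ~4.9%
```

Self-report surveys have the opposite bias: the same toy model would understate the 1 % rate if each offender admitted the act only some fraction of the time.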

The causes of misconduct can be divided into individual and environmental factors. Individual factors include the desire for success, money, prestige, or career advancement; financial interests; psychological stress and illness; and lack of moral character. Environmental factors include externally imposed pressures to produce results; poor supervision and oversight of research; cultural variations pertaining to the conduct of research; institutional corruption; poorly managed conflicts of interest; and inadequate policy development and research ethics education programs. Although some researchers believe that only scientists who are mentally unstable or amoral would commit misconduct, evidence suggests that misconduct often occurs when good scientists succumb to pressures to cut corners or bend or break the rules (Shamoo and Resnik 2014).

Preventing Misconduct

The most important strategy for preventing misconduct is to educate students, trainees, staff, and faculty in the responsible conduct of research (RCR). Since 1989, the NIH has required graduate students, postdoctoral fellows, and trainees supported by NIH funds to receive instruction in RCR. In 2009, the National Science Foundation (NSF) also began requiring students supported by NSF funds to receive RCR instruction (Shamoo and Resnik 2014). Other countries have also begun to implement RCR educational programs (Ana et al. 2013; Resnik and Master 2013). Education should address not only avoiding misconduct but also a variety of other topics related to good scientific practice, such as data management, record keeping, collaboration, authorship, mentoring, laboratory safety, publication, peer review, conflict of interest, research with human and animal subjects, and social responsibility (Shamoo and Resnik 2014). Educational activities should provide information about concepts, principles, and policies pertaining to research ethics and include discussions of cases. Instructional programs may consist of semester-long classes, workshops, conferences, lectures, and online learning modules. Individual mentoring can also play a key role in RCR education (Shamoo and Resnik 2014).

Another important strategy for preventing misconduct is to develop institutional policies that inform students, staff, and faculty about the expected standards of behavior. Institutional policies should deal with misconduct as well as the other research ethics topics mentioned above and should be publicly accessible. While many universities around the world have already taken steps in this direction, many have not, so further policy development is necessary. Research sponsors and scientific journals should also continue to develop policies that address misconduct and other ethical concerns.

Ethical leadership also plays a key role in preventing misconduct. Institutional leaders include laboratory directors, department heads, deans, vice presidents, and other people involved in the management, supervision, and oversight of research. Leaders can set a positive tone for the organization by modeling ethical behavior and expressing a commitment to ethical values and principles. Leaders who set a negative tone may encourage moral indifference and corruption. Some of the worst scandals in science and business have been the result of unethical leadership (Shamoo and Resnik 2014).

Finally, auditing of research records can help to prevent misconduct. Auditing can be helpful in detecting errors and inconsistencies in research, as well as deliberate violations of laws and institutional policies. Auditing is a standard practice in banking, health care, insurance, air travel, and many other industries. Audits can detect problems that people are not aware of or are not willing to report. Audits can be conducted randomly or for cause (i.e., when a problem emerges) (Shamoo and Resnik 2014).

Conclusion

Research misconduct is a global problem that threatens the integrity and trustworthiness of science and can have negative impacts on society. Scientists, government officials, research sponsors, and journal editors should take steps to prevent misconduct and minimize its impact on science. Some of these steps include adopting a universally recognized definition of misconduct; formulating policies and procedures for reporting, investigating, and adjudicating misconduct; and implementing educational programs in research ethics.

Bibliography

  1. Ana, J., Koehlmoos, T., Smith, R., & Yan, L. L. (2013). Research misconduct in low and middle-income countries. PLoS Medicine, 10(3), e1001315.
  2. Broad, W., & Wade, N. (1983). Betrayers of the truth: Fraud and deceit in the halls of science. New York: Simon and Schuster.
  3. European Science Foundation. (2008). Stewards of integrity: Institutional approaches to promote and safeguard good research practice in Europe. Strasbourg: European Science Foundation.
  4. Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5), e5738.
  5. Kim, J., & Park, K. (2013). Ethical modernization: Research misconduct and research ethics reforms in Korea following the Hwang affair. Science and Engineering Ethics, 19(2), 355–380.
  6. Martinson, B. C., Anderson, M. S., & de Vries, R. (2005). Scientists behaving badly. Nature, 435(7043), 737–738.
  7. Office of Science and Technology Policy. (2000). Federal research misconduct policy. Federal Register, 65(235), 76260–76264.
  8. Resnik, D. B. (2003). From Baltimore to bell labs: Reflections on two decades of debate about scientific misconduct. Accountability in Research, 10(2), 123–135.
  9. Resnik, D. B., & Master, Z. (2013). Policies and initiatives aimed at addressing research misconduct in high-income countries. PLoS Medicine, 10(3), e1001406.
  10. Resnik, D. B., Neal, T., Raymond, A., & Kissling, G. (2014). Misconduct definitions adopted by U.S. research institutions. Accountability in Research (in press).
  11. Shamoo, A. E., & Resnik, D. B. (2014). Responsible conduct of research (3rd ed.). New York: Oxford University Press.
  12. Singapore Statement on Research Integrity. (2010). Available at http://www.singaporestatement.org/statement.html. Accessed 5 Mar 2014.
  13. Tri-Council Agency. (2011). Tri-Council Agency framework: Responsible conduct of research. Available at http://www.rcr.ethics.gc.ca/eng/policy-politique/framework-cadre/#311. Accessed 5 Mar 2014.
  14. UK Research Council. (2012). The research ethics guidebook. Available at http://www.ethicsguidebook.ac.uk/Research-Council-funding-122. Accessed 5 Mar 2014.
  15. Zeng, W., & Resnik, D. (2010). Research integrity in China: Problems and prospects. Developing World Bioethics, 10(3), 164–171.
  16. Redman, B. K. (2013). Research misconduct policy in biomedicine. Cambridge, MA: Massachusetts Institute of Technology Press.
  17. Wells, F., & Farthing, M. (Eds.). (2008). Fraud and misconduct in biomedical research (4th ed.). London: Royal Society of Medicine Press.
