Standards of psychotherapy practice refer to what psychotherapists actually do, not to their education, credentials, or experience—although these factors may be precursors to practicing in a way that meets standards of ethics and efficacy. To understand these standards, it is first necessary to know what has been established about the effectiveness of psychotherapy. Randomized controlled studies indicate the existence of particular ‘‘protocol’’ therapies that are effective for specific conditions; otherwise, all types of therapies appear to work equally well (the ‘‘Dodo Bird’’ finding), perhaps due to the quality of the ‘‘therapeutic alliance.’’ It follows that standards dictate what should be done (‘‘hortatory rules’’) when conducting a protocol therapy, but do not provide explicit rules about what to do in other types of therapies, which constitute the vast majority. Instead, the standards for these require that therapists not behave in ways inconsistent with what is known from scientific research, or at least with what is believed to be known on the basis of empirical investigations interpreted in a rational manner. Thus, these standards are minatory ones that prohibit ‘‘out of bounds’’ behavior. Such behavior occurs, often justified on the grounds that ‘‘we do not know,’’ or that something might or could be true (e.g., recovered repressed memories), or may be shown in the future to be true to some extent or other. Standards of practice require, however, behavior consistent with what is currently known.
Outline
I. Introduction
II. Findings on Psychotherapy Effectiveness
A. Standard Randomized Trials
B. Quasi-Experiments
C. Regression Effects
D. Rationale for Randomization
E. Conclusions from Randomized Trials
F. Summaries and Meta-Analyses
III. What Findings Imply about Standards
IV. Specifying Unacceptable Behavior
V. Purpose of Standards
VI. Changing Definition of ‘‘Competence’’
VII. Conclusion
I. Introduction
Standards of practice refer to principles governing the effective and ethical practice of psychology—whether in a private setting, a clinic, or a nonprofit institution. The standards refer to the actual behavior of the individual purporting to apply psychological principles, or experience, or even ‘‘intuition’’ for the purpose of helping other individuals, groups, or organizations, usually for monetary remuneration. The standards refer to what people do—or should do—not to their training, or knowledge, or ‘‘consideration’’ prior to acting. There are also standards of training (often assessed by educational record) and of knowledge (often assessed by testing), but they are secondary to practice standards. First, meeting these standards in no way guarantees meeting standards of practice; second, the very existence of these secondary requirements is based on the belief that satisfying them will enhance the probability of high-quality and ethical practice. These secondary standards (e.g., ‘‘credentialing’’) cannot substitute for standards of practice.
In order to justify these standards of actual practice, it is first necessary to understand what we know about the practices of psychology. Here, I will concentrate on what we know about psychotherapy for mental health problems. Moreover, I will concentrate on knowledge that can be justified by current empirical evidence assessed in a rational manner. A particular practitioner, for example, may have some ideas about psychotherapy and mental health that under later close examination turn out to be valid, or at least useful. If, however, there is no systematic evidence at the time the practitioner applies these ideas, then they cannot be categorized as part of ‘‘what is known’’ in the area of mental health. As will be argued later, the fact that these ideas are not yet validated (and might never be) may or may not prohibit their use. The conditions of such potential prohibition will be considered later. What is important at this point is to understand that practice standards must be based on current knowledge, which can be equated with current belief justified on an empirical and rational basis—even though at some later time such belief may turn out to be incomplete, flawed, or even downright wrong. People claiming to apply psychological knowledge, however much individual skill may be involved in the application, are bound by the nature of the knowledge at the time they apply it.
II. Findings on Psychotherapy Effectiveness
A. Standard Randomized Trials
There are two basic findings in the area of psychotherapy, the subject of this research paper. Both of these findings are based on standard randomized trial investigations, in which subjects are randomly assigned to an experimental group versus a comparison group and outcomes are assessed using as ‘‘blind’’ a method as possible. The comparison group can consist of a no-treatment control group, a wait-list control group, or a commonly accepted treatment to which the experimental one is to be compared. The reason for random assignment is that the expectation of the effect of variables not manipulated by the assignment itself is the same for both groups (or for all groups in multigroup designs), and we have greater reason to believe that this expectation will approach reality the larger the sample. The mathematical result is that (critics might argue somewhat circularly) the expectation of deviations from these expectations becomes smaller the larger the sample, so that we can have greater and greater confidence with larger samples that ‘‘nuisance’’ and ‘‘confounding’’ variables do not affect the outcome of the investigation.
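The point about deviations shrinking with sample size can be made concrete with a small simulation. The sketch below is illustrative code, not from any of the studies discussed; the variable names and numbers are invented. It randomly splits subjects into two groups and measures how far apart the groups land, on average, on a ‘‘nuisance’’ variable the experimenter never observes.

```python
# A minimal simulation sketch: under pure random assignment, the expected
# group difference on an unmeasured nuisance variable is zero, and its
# typical magnitude shrinks roughly as 1/sqrt(n). All numbers illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mean_imbalance(n_per_group, n_trials=2000):
    """Average absolute group difference on a standard-normal nuisance
    variable after random assignment."""
    diffs = []
    for _ in range(n_trials):
        nuisance = rng.standard_normal(2 * n_per_group)
        perm = rng.permutation(2 * n_per_group)
        treated, control = perm[:n_per_group], perm[n_per_group:]
        diffs.append(abs(nuisance[treated].mean() - nuisance[control].mean()))
    return float(np.mean(diffs))

for n in (10, 100, 1000):
    print(n, round(mean_imbalance(n), 3))
# Output is approximately 0.36, 0.11, 0.036: a tenfold increase in sample
# size cuts the expected imbalance by about a factor of sqrt(10).
```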
B. Quasi-Experiments
The alternatives to standard randomized trials often involve matching or some type of quasi-experiment that can lead to plausible but not ‘‘highly justifiable’’ results. Consider, for example, a study using a matching strategy to assess the effect of having an abortion. We might take two groups of women experiencing unwanted pregnancy who are alike in many of the respects we think may be relevant to how an abortion will affect them (e.g., religion, social class, education, political orientation) and observe how the women in these two groups differ some years later—after those in one group have freely chosen to have an abortion and those in the other group have freely chosen not to do so. The problem, however, is that there must be some variable or variables on which the women in these two groups differ radically, given that they made different choices. In fact, this variable or variables must be very powerful, given that we have ‘‘controlled’’ for the important variables that we believe are relevant to the choice and its effects. This ‘‘unobserved variable problem’’ is an extremely important one that makes matching studies suspect.
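A simulation can show how severe this problem is. In the hedged sketch below, with all quantities invented for illustration, the true effect of the choice on the later outcome is exactly zero, yet a comparison of matched groups shows a large ‘‘effect,’’ because an unobserved trait drives both the choice and the outcome.

```python
# Sketch of the unobserved variable problem: matching on the observed
# covariate does nothing about the unobserved trait behind the choice,
# so a true effect of zero masquerades as a large one.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
observed = rng.standard_normal(n)       # e.g., education (we match on this)
unobserved = rng.standard_normal(n)     # the trait that drives the choice
choice = (unobserved + 0.2 * observed + rng.standard_normal(n)) > 0
outcome = unobserved + rng.standard_normal(n)   # choice itself has NO effect

# "Match" by restricting to people with nearly identical observed values
band = np.abs(observed) < 0.1
spurious = outcome[band & choice].mean() - outcome[band & ~choice].mean()
print(round(spurious, 2))   # roughly 1.1 standard deviations of pure bias
```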
In contrast, true quasi-experiments tend to be valid to the degree to which they approach true randomized ones. For example, consider the ‘‘interrupted time series’’ assessment of providing a psychoactive drug. An individual or individuals are assessed on a regular basis prior to the time the drug is administered and then assessed afterwards. But in an actual clinic setting, the introduction of the drug is often a response to some particular problem or problems that lead the clinician to believe that drug treatment is desirable or necessary. For example, a woman who is raped at a halfway house by a counselor may enter an inpatient treatment facility and—due to her condition—be immediately given Prozac. Now she is calmer. Is this calmness the effect of getting away from the halfway house? Of the more relaxed inpatient environment itself (which people do not learn to hate immediately)? Of the Prozac?
C. Regression Effects
A regression effect refers to the fact that if two variables are not perfectly correlated, the one we predict will on the average be closer to its standardized mean than the one from which we make the prediction. (In the linear prediction equation for standard scores, the predicted value of y is equal to r times the actual value of x, and conversely the predicted value of x is equal to r times the actual value of y.) Moreover, we have no way of assessing regression independent of other factors, because regression includes the ‘‘real world’’ variables responsible for the lack of a perfect correlation. If instead the drug is introduced at a randomly determined time, we do not have to worry about such systematic regression effects or confounds. (But we may be justified in worrying about unsystematic ones.) What we are doing when we introduce the manipulation at a randomly determined time, however, is approximating a truly randomized experiment.
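The prediction equation for standard scores makes the regression effect easy to demonstrate numerically. The sketch below uses an invented correlation and cutoff, for illustration only: it selects ‘‘patients’’ whose first measurement is extreme and shows that their second measurement looks improved with no treatment at all.

```python
# Regression to the mean: with correlation r between two standardized
# measurements, the predicted second score is r times the first, so cases
# selected for extreme first scores look "improved" at remeasurement.
import numpy as np

rng = np.random.default_rng(2)
r = 0.5
z1 = rng.standard_normal(100_000)
z2 = r * z1 + np.sqrt(1 - r**2) * rng.standard_normal(100_000)  # corr = r

severe = z1 > 1.5                     # "admitted" for severe symptoms
print(round(z1[severe].mean(), 2))    # ~1.94 standard deviations at intake
print(round(z2[severe].mean(), 2))    # ~0.97 at follow-up: r * 1.94, with
                                      # no intervention whatsoever
```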
D. Rationale for Randomization
The basic rationale for randomization is that we ideally want to know what would have happened to subjects had they not been assigned to the experimental treatment, a hypothetical counterfactual. Of course, because we cannot both assign and not assign at the same time, a direct assessment of this counterfactual is impossible. Randomly assigning people so that the expectation is that they are equivalent on variables that might affect outcome but in which we are not interested is the best justified substitute for actual knowledge of the hypothetical counterfactual. There are, of course, problems, particularly in the psychotherapy area. For example, it is impossible for the subject to be ‘‘blind’’ to receiving therapy, and often it is impossible for those evaluating its effects to be blind as well. Experiments in this area contrast quite sharply with many experiments in medicine, where subjects are given placebos without being told whether they have received the actual drug or the placebo (or which of two drugs), and those evaluating their status are also ignorant of the assignment. Another problem is that randomized controlled experimentation can (and should) be conducted only with subjects who agree to be randomly assigned. Perhaps such subjects—or even those who agree to be evaluated in a setting that does not use random assignment—are different from those who will receive or not receive the treatments evaluated. But that problem arises in any evaluation, not just random ones. (While there is the possibility of using ‘‘phony’’ therapy as a ‘‘placebo control,’’ such a procedure raises severe ethical problems.) Thus, not because it is perfect, but because it yields the best knowledge available, we must turn to randomized trials to understand psychotherapy. As the eminent statistician Frederick Mosteller vigorously argued in 1981, while we should strive to appreciate the strengths and weaknesses of different approaches instead of being dogmatic about statistical purity, we must be aware that ‘‘the alternative to randomized controlled experiments is fooling around with people’s lives.’’
E. Conclusions from Randomized Trials
Randomized trials of psychotherapy effectiveness yield two rather simple conclusions. The first is that there exists a set of problems for which carefully constructed (‘‘protocol’’) therapies are effective—according to the criterion that at least one randomized trial has ‘‘validated’’ these forms of psychotherapy. These therapies are listed in a 1995 report from the Task Force on Promotion and Dissemination of Psychological Procedures, Division of Clinical Psychology, American Psychological Association. As the critic Garfield pointed out the following year, these results cannot be used as a basis for justifying all—or even most—psychotherapy, or for setting standards. First, the number of conditions and the number of therapies are quite limited, hardly representative of the practice of ‘‘psychotherapy.’’ In fact, the task force itself noted that these validated therapies are often not even taught in programs that various sources list as good ones for training graduate students in clinical psychology. The constrained nature of the types of therapy provided overlooks, to quote Garfield (p. 218), ‘‘the importance of client and therapist variability, the role of the common factors in psychotherapy, and the need to adapt therapeutic procedures for the problems of the individual client or patient.’’ In a highly influential 1995 report of a Consumers Union study, Seligman refers to such studies as ‘‘efficacy’’ ones. He writes (pp. 965–966): ‘‘In spite of how expensive and time-consuming they are, hundreds of efficacy studies of both psychotherapy and drugs now exist—many of them well done. These studies show, among many other things, that cognitive therapy, interpersonal therapy, and medications all provide moderate relief from unipolar depressive disorder; that exposure and clomipramine both relieve the symptoms of obsessive-compulsive disorder moderately well, but that exposure has more lasting benefits; that cognitive therapy works very well in panic disorders; that systematic desensitization relieves specific phobias; that ‘applied tension’ virtually cures blood and injury phobia; that transcendental meditation relieves anxiety; that aversion therapy produces only marginal improvement with sexual offenders; that disulfiram (Antabuse) does not provide lasting relief from alcoholism; that flooding plus medication does better in the treatment of agoraphobia than either alone; and that cognitive therapy provides significant relief of bulimia, outperforming medications alone.’’ But then Seligman compares such studies to what he terms ‘‘effectiveness’’ studies, that is, those of ‘‘how patients fare under the actual conditions of treatment in the field’’ (p. 966)—and finally reaches a conclusion with which few of his critics agree: ‘‘The upshot of this is that random assignment, the prettiest of the methodological niceties in efficacy studies, may turn out to be worse than useless for the investigation of the actual treatment of mental illnesses in the field’’ (p. 974).
F. Summaries and Meta-Analyses
There are, despite the claims of Seligman, a multitude of studies involving random assignment that attempt to assess what is actually done in the field—without limiting the practitioner to following a carefully crafted protocol. See, for example, the article by Landman and Dawes—discussed in greater detail later—for a description of the diversity of the studies involving random assignment.
These diverse studies have been either summarized qualitatively, analyzed by ‘‘vote counts’’ based on their outcomes, or subjected to meta-analysis to reach general conclusions, because each in fact concerns one type of distress in one setting, often with a single or only a few therapists implementing the procedure under investigation. (Note that the same limitation applies to the ‘‘validated’’ efficacy studies as well.) Most summaries and meta-analyses consider reductions in symptoms that the people entering therapy find distressing or debilitating. Some measure of these symptoms’ severity is obtained after the random assignment to treatment versus control, and the degree to which the people in the randomly assigned experimental group differ from those in the control group on the symptoms is assessed and averaged across studies. (Occasionally, difference scores are assessed as well.) The summaries and meta-analyses concentrate on symptoms, which can be justified because it is the symptoms that lead people to come to psychotherapists. The summaries and meta-analyses involve ‘‘combining apples and oranges,’’ which can be justified by the fact that the types of nonprotocol therapies are extraordinarily diverse (fruits). For example, simply providing information on a random basis to heart attack victims in an intensive care unit can be considered psychotherapy.
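The core computation behind such pooling is simple enough to sketch. The example below uses three invented ‘‘studies’’ with made-up symptom scales; it computes a standardized mean difference per study (treated versus control, in control-group standard deviation units) and averages them, which is the basic operation the paragraph above describes.

```python
# A hedged sketch of effect-size pooling across heterogeneous studies.
# Symptom scores: lower is better, so a positive effect size means the
# treated group ended up less symptomatic than the control group.
import numpy as np

def smd(treated, control):
    """Standardized mean difference in control-group SD units."""
    return (np.mean(control) - np.mean(treated)) / np.std(control, ddof=1)

rng = np.random.default_rng(3)
studies = [  # (treated scores, control scores) on three different scales
    (rng.normal(9, 3, 40), rng.normal(11, 3, 40)),
    (rng.normal(20, 5, 25), rng.normal(24, 5, 25)),
    (rng.normal(50, 10, 60), rng.normal(56, 10, 60)),
]
effects = [smd(t, c) for t, c in studies]
print([round(e, 2) for e in effects])       # per-study effect sizes
print(round(float(np.mean(effects)), 2))    # the "apples and oranges" average
```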
The ‘‘classic’’ meta-analysis of psychotherapy outcomes was published by Smith and Glass in 1977, and there has been little reason since that time to modify its conclusions. In general, psychotherapy is effective in reducing symptoms—to the point that the average severity of symptoms experienced by the people in the experimental group after completion of therapy is at the 25th percentile of the control group (i.e., less severe than the symptoms experienced by 75% of the people in the control group after the same period of time). That translates roughly (assuming normality and equal variance of the two groups) into the statement that if we choose a person at random from the experimental group and one at random from the control group, the one from the experimental group has a .67 probability of having less severe symptoms than the one from the control group. The other major conclusions were that the type of therapy did not seem to make a difference overall, the type of therapist did not seem to make a difference, and even the length of psychotherapy did not seem to make a difference. These conclusions are based both on evaluating the consistency of results and on evaluating their average effect sizes. These conclusions have survived two main challenges.
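Under the stated normality and equal-variance assumptions, both the percentile statement and the probability statement follow directly from a single average effect size. The check below treats d = 0.68 as an illustrative value of roughly the magnitude such meta-analyses report; it is not a figure quoted in this paper.

```python
# Converting an average effect size d into the two statements in the text:
# the treated mean's percentile in the control distribution, and the chance
# that a random treated person has milder symptoms than a random control.
from scipy.stats import norm

d = 0.68  # illustrative standardized mean difference (severity reduced)

percentile = norm.cdf(-d)           # treated mean within control distribution
superiority = norm.cdf(d / 2**0.5)  # P(treated < control) for two normals
                                    # with equal variance and mean gap d
print(round(percentile, 2))    # ~0.25 -> the "25th percentile" statement
print(round(superiority, 2))   # ~0.68 -> the ".67 probability" statement
```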
The first is that while Smith and Glass included an overall evaluation of the ‘‘quality’’ of each study, they did not specifically look at whether the assignment was really random. To address that problem, Landman and Dawes published a paper in 1982 reporting an examination of every fifth study selected from the Smith and Glass list (which had increased to 435 studies by the time it was given to Landman and Dawes); these researchers concluded—with a very high degree of inter-rater reliability based on independent judgments—that roughly a third of the studies did not involve true random assignment. A particularly egregious example involved recruiting students in a psychology department with posters urging group psychotherapy to address underachievement; the authors then compared the students who self-selected for this treatment with students with similar GPAs ‘‘randomly’’ chosen from the registrar’s list, who for all the experimenters knew had given up and left town. A more subtle example may be found in comparing people who persist in group psychotherapy with everyone in a randomly selected control group. Yes, the two groups were originally randomly constructed, but the problem is that we do not know which people in the control group would have stayed with the group psychotherapy had they been assigned to the experimental group—thereby invalidating the control group as a comparison to the experimental one. While it is possible to maintain that it seems bizarre to include in an evaluation of group psychotherapy those who did not actually participate in the groups, if there is really an effect of a particular treatment and assignment is random, then it will exist—albeit in attenuated form—when the entire experimental group is compared to the control group. (A mixture of salt and fresh water is still salt water.) The way to deal with selective completion is to study enough subjects to have a study powerful enough to test effects based on ‘‘subsets’’ of the people assigned to the experimental manipulation (e.g., those who completed). Landman and Dawes deleted the 35% of their studies that they believed not to be truly random from their meta-analysis, and reached exactly the same conclusions Smith and Glass had earlier.
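The salt-water point can be made quantitative. In the sketch below, with the effect size and completion rate invented for illustration, comparing everyone assigned to treatment against everyone assigned to control dilutes a real effect in proportion to the completion rate, but does not erase it.

```python
# Intention-to-treat attenuation: a true effect of 0.5 SD that only the
# 60% who complete therapy receive still shows up as ~0.3 SD when the
# entire assigned groups are compared.
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
true_effect, completion_rate = 0.5, 0.6

completed = rng.random(n) < completion_rate
treated = rng.standard_normal(n) + true_effect * completed  # benefit only
control = rng.standard_normal(n)                            # if completed

print(round(treated.mean() - control.mean(), 2))  # ~0.30 = 0.5 * 0.6
```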
A second problem is the ‘‘file drawer’’ one. Perhaps there are a number of studies showing that psychotherapy does not work, or even having results indicating that it might be harmful, which simply are not published in standard journals—either because their results do not reach standard levels of ‘‘statistical significance’’ or because flaws that might have been (often unconsciously) overlooked had the conclusions been more popular are noted as a result of their unpopularity. The problem has been addressed in two ways. First, the number of such studies would have to be so large that it appears unreasonable to hypothesize their existence in file drawers. Second, it is possible to develop a distribution of the statistics of statistical significance actually presented in the literature and show that their values exceed (actually quite radically) those that would be predicted from randomly sampling above some criterion level that leads to publication of the results.
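The first of these counterarguments is often formalized as Rosenthal’s ‘‘fail-safe N’’: the number of unpublished null-result studies that would have to be sitting in file drawers to pull the combined evidence below significance. The numbers below (400 published studies averaging a z of 1.5) are invented purely to show the shape of the computation.

```python
# A hedged sketch of the fail-safe N computation. Combining k studies by
# Stouffer's method gives Z = sum(z_i) / sqrt(k); we solve for how many
# additional z = 0 studies would drag the combined Z below 1.645 (p = .05).
k, mean_z = 400, 1.5      # invented: published studies and their average z
z_alpha = 1.645           # one-tailed .05 criterion

sum_z = k * mean_z
failsafe_n = (sum_z / z_alpha) ** 2 - k
print(int(failsafe_n))    # ~132,600 hidden null studies would be required
```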
Another problem concerns the identity of the psychotherapists. Here, there is some ambiguity, because the studies attempting to ‘‘refute’’ the conclusion of Smith and Glass are generally poorly conceived, in that the psychotherapy subjects rather than the psychotherapists themselves are sampled and used as the unit of measurement—especially for statistical tests. But if we want to generalize about psychotherapists, then it is necessary to sample psychotherapists. For example, if a standard analysis of variance design is used where therapists are the ‘‘treatment’’ effect, then generalization to therapists—or to various types of therapists—requires a ‘‘random effects’’ analysis rather than a ‘‘fixed effects’’ one. One study did in fact follow this prescription, but then concluded on the basis of a post hoc analysis that more successful therapists were different from less successful ones—after finding no evidence for a therapist effect overall!
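A minimal version of the unit-of-analysis correction is sketched below, with all numbers invented: client outcomes are first aggregated to one mean per therapist, and the comparison between two hypothetical types of therapists is then tested on those therapist-level scores, so that the sample size reflects the number of therapists, not the number of clients.

```python
# Therapist, not client, as the sampling unit: aggregate each therapist's
# client outcomes to a single mean, then test across therapists.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

def therapist_means(n_therapists, clients_each=15):
    # each therapist has an idiosyncratic effect; clients add noise around it
    effects = rng.normal(0.0, 0.4, n_therapists)
    return np.array([rng.normal(e, 1.0, clients_each).mean()
                     for e in effects])

type_a = therapist_means(12)   # e.g., therapists with one kind of training
type_b = therapist_means(12)   # e.g., therapists with another
print(ttest_ind(type_a, type_b))   # n = 12 per group, not 180 clients
```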
The results of studies treating each psychotherapist as a separate sample observation generally show that beyond a very rudimentary level of training, credentials and experience do not correlate (positively) with efficacy, as summarized by Dawes in Chapter 4 of his 1994 book. There is some slight evidence that people who are considered ‘‘empathetic’’ tend to achieve better outcomes (where this characteristic is assessed by colleagues—not, in a circular manner, by clients who themselves improve); there is also some evidence that when therapists agree to engage in different types of therapy, they do best applying the ones in which they have the greatest belief. (It is possible to question the importance of the latter finding, given that outside of randomized controlled studies, therapists tend to provide only the type of psychotherapy that they believe to be the most helpful to their clients.)
III. What Findings Imply about Standards
The overall conclusion supports the importance of ‘‘nonspecific’’ factors in therapeutic effectiveness. This general result about the quality of the ‘‘therapeutic alliance,’’ as opposed to the specific type of therapy, has been somewhat derogatorily referred to as the ‘‘Dodo bird finding,’’ in that ‘‘all win and all must have prizes.’’ (For a recent explication see the 1994 paper of Stubbs and Bozarth.) The problem is that studies hypothesizing the ‘‘quality of relationship’’ as the active ingredient generally lack independent definitions of ‘‘quality,’’ or an evaluation of exactly which nonspecific factors are responsible for success or failure.
Now let us consider what these two findings—about specific protocol therapies and about nonspecific factors—imply about standards. In a 1996 report from the Hastings Center entitled ‘‘The Goals of Medicine,’’ an international panel sponsored by the Center wrote: ‘‘Of necessity, good caring demands technical excellence as a crucial ingredient’’ (p. S12). The protocol therapies clearly demand technically correct implementation as a crucial ingredient; failing to be technically correct completely violates standards of practice.
But what about the other types of therapies? It is very difficult to require technical excellence of ‘‘relationship’’ therapies—which, again, constitute the majority.
What then can be demanded as a standard? Therapists often point out that the research in psychology does not imply exactly what they should do. True, except for the protocol therapies. Conversely, however, research does imply what should not be done, what is ‘‘out of bounds.’’ Thus, research in psychology and related areas implies minatory (‘‘thou shalt not’’) as opposed to hortatory (‘‘thou shalt’’) directives and standards for much of therapy. Of course, it is possible to rephrase minatory statements to become hortatory (e.g., ‘‘thou shalt avoid doing this thing,’’ such as committing murder), but most people recognize the distinction between the two types of statements. For example, laws are based on violation of minatory rules, not hortatory ones, and even outside of the legal context we often make the distinction between the morality of simply not breaking rules versus that of doing something positive for our fellow humans. Moreover, people are often willing to make trade-offs among differing hortatory goals, but not to ‘‘weight’’ various violations of minatory rules against one another, unless the trade-off is explicitly decided in advance—such as killing in wartime, or lying when a spy or when carrying messages for Refuseniks. We do not, however, talk about ‘‘trade-offs’’ between murder and the achievement of some valuable goal (for example, saving the lives of 10 people by slaughtering a homeless man whose body can provide 10 organs to be transplanted into these people who would otherwise die).
Of course, when the boundaries of ‘‘thou shalt not’’ are sufficiently narrow, minatory directives can become hortatory ones, but that is not common in psychological practice. We find, for example, a positive correlation between peer evaluations of therapist empathy and therapist effectiveness—as mentioned earlier—but we cannot demand that a therapist be empathetic all the time: in particular, we cannot demand that therapists be empathetic ‘‘types’’ of people, which is what their peers are evaluating.
IV. Specifying Unacceptable Behavior
Is there really the possibility of specifying such ‘‘out of bounds’’ behavior? Does it occur? Or does ‘‘anything go’’? There is such a possibility, such behavior does occur, and anything does not go. Such behavior clearly violates standards of psychotherapy—whether the standards are based on our views of ethics or of effectiveness. For example, psychological research shows memory to be ‘‘reconstructive’’ and hence prone to errors that ‘‘make sense’’ of what happened, considered either by itself or in broader contexts such as one’s ‘‘life story.’’ Further, there has never been any research evidence for the concept of ‘‘repression.’’ That absence does not mean it is impossible for someone to ‘‘recover a repressed memory,’’ or that such reconstructed memories are necessarily historically inaccurate. What it does mean is that as professionals practicing their trade—which means applying psychological knowledge as we now know it—therapists should not be involved in attempting to do something that current research evidence indicates can easily create illusions and needless suffering.
Nevertheless some are. For example, in a 1995 survey by Poole, Lindsay, Memon, and Bull of licensed U.S. doctoral-level psychologists randomly sampled from the National Register of Health Service Providers in Psychology, 70% indicated that they used various techniques (e.g., hypnosis, interpretation of dreams) to ‘‘help’’ clients recover memories of child sexual abuse; moreover, combining the sample from that register with a British sample from the Register of Chartered Clinical Psychologists, the authors conclude: ‘‘Across samples, 25% of the respondents reported a constellation of beliefs and practices suggestive of focus on memory recovery, and these psychologists reported relatively high rates of memory recovery in their clients’’ (p. 426). The study asked about the use of eight techniques that cognitive psychologists have found to involve bias and create errors: hypnosis, age regression, dream interpretation, guided imagery related to abuse situations, instructions to give free rein to the imagination, use of family photographs as memory cues, instructions for remembering/journaling, and interpreting physical symptoms. Remarkably, with the exception of the last three techniques, the proportion of survey respondents who reported using them was overshadowed by similar or higher proportions of respondents who ‘‘disapproved’’ of using them.
In addition, failure to disapprove of interpreting physical symptoms as evidence of unusual events can be traced to a failure to understand the base rate problem in interpreting diagnostic signs—a failure that has been decried ever since Meehl and Rosen first discussed it in detail in 1955, but which is remarkably robust, as experimental studies in the area of behavioral decision making indicate that people equate inverse probabilities without equating simple ones, even in the face of evidence that these simple probabilities are quite discrepant. It takes one step to move from the definition of a conditional probability to the ratio rule, which states that P(a given b)/P(b given a) = P(a)/P(b). For example, the probability of being a hard drug user given one smokes pot, divided by the probability of smoking pot given one is a hard drug user, is exactly equal to the simple probability of being a hard drug user divided by the probability of smoking pot. Exactly. To maintain that because (it is believed that) a very low base rate event (e.g., being brought up in a satanic cult, an event that may have probability zero) can imply high base rate distress (e.g., poor self-image and an eating disorder), it therefore follows that the distress implies the event, is just flat-out irrational. Doing so violates the standard of practice proposed here, which is that practice be based on empirical knowledge interpreted in a rational manner.
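The ratio rule can be verified with any set of numbers; those below are invented purely to make the pot/hard-drug example concrete.

```python
# Numerical check of the ratio rule: P(a|b) / P(b|a) = P(a) / P(b).
# With a low base rate for a and a high one for b, the two conditional
# probabilities differ dramatically even though each is well defined.
p_a = 0.02            # invented base rate: being a hard drug user
p_b = 0.20            # invented base rate: smoking pot
p_b_given_a = 0.90    # suppose most hard drug users smoke pot

p_a_given_b = p_b_given_a * p_a / p_b   # Bayes' theorem: 0.09, not 0.90

print(round(p_a_given_b / p_b_given_a, 3))  # 0.1
print(round(p_a / p_b, 3))                  # 0.1 -- identical, as required
```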
Unfortunately, however, the debate about recovered repressed memories has degenerated into claims and counterclaims about whether they can exist, or about the—totally unknown—frequency with which they are accurate or invented, rather than centering on the question of whether attempting to recover them is justified by what is known. In fact, it is not. The real question is whether doing so is ‘‘out of bounds’’ behavior; given that we know a great deal about the reconstructive nature of memory, but very little about whether memory of trauma differs from other memories—and if so, in exactly what way—such recovery must be categorized as out of bounds, that is, as practice that violates standards.
V. Purpose of Standards
The purpose of standards of psychological practice is to aid the client with knowledge-based skills; ignoring knowledge is no more appropriate than having sexual contact with a client. Standards must be extended in a minatory way to prohibit the application of ignorance, just as there are minatory standards prohibiting behavior of the therapist that may both harm the client and degrade the profession (e.g., sexual contact). Moreover, a minatory standard can be enforced, and in the current author’s experience on the American Psychological Association Ethics Committee, such standards were indeed the ones enforced. People were expelled from the American Psychological Association or lost their licenses to practice (in one order or the other) primarily on the basis of sexual contact with clients, on the basis of having been found guilty of a felony involving their practice (e.g., cheating on insurance), or on the basis of practicing beyond their area of competence.
VI. Changing Definition of ‘‘Competence’’
The last reason for expelling people from the association brings up a specific distinction between the standards proposed in the current research paper and those proposed by the American Psychological Association. (See its Ethics Code published in 1992.) The latter defines ‘‘competence’’ in terms of education, training, or experience. Specifically, standard 1.04(a) reads: ‘‘psychologists provide services, teach, and conduct research only within the boundaries of their competence, based on their education, training, supervised experience, or appropriate professional experience’’ (italics added). The problem with this definition of competence is that it does not indicate that training must be in something for which there is some scientific knowledge. For example, training in the alleviation of posttraumatic stress disorder (PTSD) could involve people whose trauma was supposedly that of being kidnapped by aliens. In fact (Dawes, 1994), there is a set of psychotherapists who have exactly this specialty, and one of them mentions the others in the back of her book—others who are licensed and can receive third-party payment for treatment of this type of PTSD.
The other problem with this definition is that it allows a very specific characterization of what counts as relevant ‘‘training,’’ a characterization that could even exclude generalizations based on scientific studies. For example, Courtois in 1995 criticized those who criticize recovered repressed memory psychotherapists, on the grounds that these critics themselves have not been involved with recovering repressed memories. She writes: ‘‘Unfortunately, a number of memory researchers are erring in the same way that they allege therapists to be erring; they are practicing outside of their areas of competence and/or applying findings from memory analogues without regard to the ecological validity and making misrepresentations, overgeneralizations, and unsubstantiated claims regarding therapeutic practice’’ (p. 297). The criticized claims are, of course, generalizations based on what is known about memory in general, and the claim that a specific type of memory is inadequately or incorrectly characterized by such generalizations requires assuming a ‘‘burden of proof.’’ Exceptions to rules require evidence that they are indeed exceptions. No evidence is presented. Instead, a statement is made that people who base generalizations on well-established principles derived from empirical research are themselves behaving unethically because they have not been immersed in the context in which these exceptions are claimed to occur. It is a circular argument that can equally well be made against those of us who believe that PTSD therapists who help people recover the memory of being kidnapped by aliens should not be reimbursed from government or insurance funds. Since we ourselves would not even think of conducting such therapy, how can we evaluate it?
The Ethics Code of the American Psychological Association also emphasizes ‘‘consideration of’’ what is known, but it does not mandate applying it. More specifically, for relationship types of therapy, it does not mandate that psychotherapists refrain from doing what careful consideration indicates they should not do. Certainly, training and consideration are precursors to practicing well and ethically, but as pointed out earlier, they cannot be substitutes. The reason they cannot be substitutes is that the training must be training in that which works, which is then applied. ‘‘Consideration’’ must be consideration of valid knowledge, which is then applied. Again, I am not claiming that knowledge will not change in the future, or that everything psychologists currently believe to be true is necessarily true. The point is that good practice must be based on the best available knowledge and evidence—not on what might be, could be, or may turn out to be true after years of subsequent investigation. Moreover, what is believed to be true does provide bounds—minatory standards.
The philosophy espoused in this research paper on standards of practice is close to that of the National Association for Consumer Protection in Mental Health Practices. (See its goals as enunciated in 1996 by its president, Christopher Barden.) The major difference, if there is one, involves how much emphasis is placed on the client’s explicit recognition that when the type of therapy is a ‘‘relationship’’ one, there is really no hard evidence that the particular type offered works better than any other. Relationship therapies do work overall, and it is very tricky to obtain ‘‘informed consent’’ about a whole huge category of therapy while at the same time indicating that particular members of it may lack empirical justification. The additional problem is that by emphasizing that lack for particular members, whatever placebo effects account for the efficacy of the entire class may be diminished. Avoiding such emphasis in obtaining informed consent is clearly self-serving for the psychotherapist. The question is whether it also serves the client. Rather than just assuming that it does, we could put this question to an empirical test—through randomized trials.
VII. Conclusion
The final point of this research paper is part minatory, part hortatory. The purpose of psychological practice is to provide incremental validity, that is, to help in ways in which clients could not help themselves (or at least to increase the probability of such help). The fact, for example, that a flashbulb memory may be corroborated by others does not imply that the practitioner should encourage or interpret such a memory, because corroboration by others involves historical accuracy, and the psychologist provides no incremental validity about how such corroboration may be obtained, or about what sort of corroboration may validate or invalidate the conclusion that the memory is historically accurate. Incremental validity, however, is both desirable and required, especially in a society that demands ‘‘truth in advertising.’’
A final note: this research paper has been devoted to questions of standards of practice in psychotherapy. It has not dealt with forensic psychology and the attendant standards of expert testimony in courts and other legal settings. Everything argued here, however, applies to such settings. Because testimony in courts can result in loss of freedom, it is urgent that psychotherapists who do testify meet the standards enunciated in this research paper.
Bibliography:
- American Psychological Association. (1992). Ethical principles of psychologists and code of conduct. American Psychologist, 47, 1597–1611.
- Barden, R. C. (1996). The National Association for Consumer Protection in Mental Health Practices: Office of the President. Plymouth, MN: Copies available from R. Christopher Barden, Ph.D., J.D., 4025 Quaker Lane North, Plymouth, MN 55441.
- Courtois, C. A. (1995). Scientist-practitioners and the delayed memory controversy: Scientific standards and the need for collaboration. The Counseling Psychologist, 23, 294–299.
- Dawes, R. M. (1994). House of cards: Psychology and psychotherapy built on myth. New York: The Free Press.
- Garfield, S. L. (1996). Some problems associated with ‘‘validated’’ forms of psychotherapy. Clinical Psychology: Science and Practice, 3, 218–229.
- The Hastings Center. (1996). The goals of medicine: Setting new priorities. Briarcliff Manor, NY: Publication Department, The Hastings Center.
- Landman, J. T., & Dawes, R. M. (1982). Psychotherapy outcome: Smith and Glass’ conclusions stand up under scrutiny. American Psychologist, 37, 504–516.
- Meehl, P. E., & Rosen, A. (1955). Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. Psychological Bulletin, 52, 194–216.
- Mosteller, F. (1981). Innovation and evaluation. Science, 211, 881–886.
- Poole, D. A., Lindsay, D. S., Memon, A., & Bull, R. (1995). Psychotherapy and the recovery of memories of childhood sexual abuse: U.S. and British practitioners’ opinions, practices, and experiences. Journal of Consulting and Clinical Psychology, 63, 426–437.
- Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist, 50, 965–974.
- Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies. American Psychologist, 32, 752–760.
- Stubbs, J. T., & Bozarth, J. D. (1994). The Dodo bird revisited: A qualitative study of psychotherapy efficacy research. Applied and Preventive Psychology, 3, 109–120.
- Task Force on Promotion and Dissemination of Psychological Procedures, Division of Clinical Psychology, American Psychological Association. (1995). Training in and dissemination of empirically-validated psychological treatments: Report and recommendations. The Clinical Psychologist, 48, 3–23.