Content Analysis Research Paper




I. Introduction

II. What Is Content Analysis?

III. Forms of Content Analysis

IV. Issues in Designing Content Analysis Studies

V. Human Coders, Reliability, and Stability

VI. Computer-Assisted Content Analysis

VII. Future Directions

I. Introduction

Content analysis is, as its name suggests, the analysis of the content of communications. Researchers use content analysis to make statements about the meaning, impact, or producers of those communications. Depending on the purpose of the specific research project, analysts may focus on the literal content or seek to extract deeper (or latent) meanings.

This multiplicity of purposes has led content analysts to use a variety of strategies for analyzing text systematically. Some of these strategies, such as word counts, are easy to replicate, whereas other forms are far more interpretive and dependent on the judgment of the individual who codes the text. Most forms of content analysis yield quantitative indicators. Indeed, some would define quantification as an essential aspect of content analysis (e.g., Weber, 1990). Others view it as preferable but not essential (Berelson, 1952; Holsti, 1969).

Content analysis is not new. According to Krippendorff (1980), empirical studies of communications can be dated back to the 1600s. More immediate ancestors to modern content analysis, however, are studies that sought to evaluate the content of mass media in the early 20th century and Nazi propaganda during World War II (Berelson, 1952; Krippendorff, 1980). As a method for studying communications, content analysis has been an especially popular methodology in the field of (mass) communication.

Holsti (1969) reported a trend toward a more frequent use of content analysis, as well as its application to a broader array of problems, including subjects of interest to political scientists. He furthermore noted an emerging tendency for content analysis to be used in combination with other social science research methods and a move toward computer-assisted content analysis.

This research paper emphasizes quantification, although it also discusses some of the trade-offs between quantitative and qualitative forms of content analysis. After discussing definitions and forms of content analysis, the paper describes some of the issues in designing content analysis studies, with the objective of giving the reader the basic tools for evaluating whether this methodology might be useful in her or his research. The paper then turns to the issue of reliability and stability, which are of particular importance when using human coders, and subsequently turns to a discussion of the emerging strategy of using computer assistance in the coding of text for content analysis. The paper ends with an assessment of the future of content analysis in political science.

II. What Is Content Analysis?

According to Weber (1990), content analysis is a “research method that uses a set of procedures to make valid inferences from text” (p. 9). This concise definition captures the essence of content analysis very well, although it may be worth adding that text is not the only content that might be subjected to analysis. (Transcripts of) oral communications, as well as visual communications, could also be subjected to this type of analysis. This research paper, however, limits its scope to the content analysis of text (or at least verbal material) and does not consider the analysis of visual communications.

Beyond making valid inferences from text, most content analysis “seeks to quantify content in terms of predetermined categories and in a systematic and replicable manner” (Bryman, 2004, p. 181; Holsti, 1969). In other words, content analysis endeavors to analyze text in a systematic, empirical manner that is made sufficiently explicit to permit replication. Generally, this means that content analysis proceeds on the basis of instructions that enumerate explicit categories. Consequently, Babbie (2004) has described content analysis as “essentially a coding operation” (p. 318). Although this is accurate, it also sells content analysis short as a method for analyzing the content of communications.

The coding operation is at the heart of content analysis, but content analysis cannot be reduced to coding, just as public opinion research cannot be reduced to the survey instruments often used to ascertain public opinion. The coding of text or other communications permits the analyst to ascertain patterns and test hypotheses about those communications. Holsti (1969) maintained that content analysis “must be undertaken for some theoretical reason” (p. 14). Although not all studies employing content analysis satisfy that condition, such studies are generally undertaken to answer some question that is either of scientific interest or of political (or professional) relevance. In other words, studies may employ content analysis for a variety of purposes. Holsti summarized these purposes into three groupings:

  1. Content analysis may be used to describe characteristics of communications. For instance, a researcher may wish to discern trends in the content of newspaper or other media outlets or analyze the rhetorical style of a decision maker. An example of this type of research in political science is Breuning, Bredehoft, and Walton’s (2005) analysis of academic journal content.
  2. Content analysis may be used to infer psychological or other characteristics of the speaker. For instance, Hermann’s (1980, 2002) Leadership Trait Analysis is based on content analysis. She used a coding scheme that is informed by psychological theories to analyze the (spontaneous) remarks of political leaders in order to make assessments about various personality traits.
  3. It is further possible to use content analysis to assess the (potential) impact of communications. Eshbaugh-Soha’s (2006) study of the political impact of presidential speeches is an example of this type of analysis.

Within political science, content analysis has been used for all three of these purposes, although not with the same frequency. Studies that use content analysis to evaluate leader personality, motivation, or both, tend to be more plentiful in political science than studies of the impact of communication. Studies that describe the characteristics of communications tend to be more plentiful in the study of (mass) communication than in political science.

A distinct benefit of content analysis is that it is an unobtrusive research method (Babbie, 2004). The advantage of such a research method is that, one, it does not require the cooperation of the subject under investigation, and two, the subject will not alter her or his behavior as a result of awareness of being tested. The second point is important. There is evidence, for instance, that survey respondents on occasion provide socially acceptable answers rather than truthfully reporting their behavior. They may say they voted, because they think they should have, when in fact they stayed home. The first point is relevant to the study of political decision makers and foreign policy decision making. Although it can be useful to understand what motivates decision makers, it is highly unlikely that such individuals would make themselves available for psychological testing. Some researchers (e.g., Hermann, 1980, 2002; Walker, Schafer, & Young, 1998; Winter, 2005) have therefore devised research strategies that rely on content analysis of remarks and speeches in order to evaluate decision makers’ personalities and motivations. It is in this area of political science that content analysis is used most consistently.

Another benefit of content analysis is that it is relatively easy to undertake. It requires no special equipment or access to significant research funds (Babbie, 2004). Whereas survey research can be expensive, a study using content analysis can be completed for very little money. A single investigator with access to the relevant textual material for coding can complete a content analysis study, although a very large content analysis–based study may require multiple human coders or access to content analysis software in order to complete the study in a timely fashion. Paying human coders or purchasing content analysis software would, obviously, add to the expense of implementing a content analysis study.

Further, an investigator can much more easily repeat a portion of the study than would be the case with survey research (Babbie, 2004; Bryman, 2004). This includes cases where the investigator determines that an additional dimension needs to be considered in the analysis. It requires going back through the text to code the additional variable, which can be time-consuming when human coders are used, but it remains feasible as long as the text remains available. Repeating the analysis or adding another dimension to be coded is quite easy when a computer-assisted content analysis strategy is used.

Finally, content analysis lends itself to studying trends over long stretches of time (Bryman, 2004). For instance, Eshbaugh-Soha (2006) analyzed presidential speeches across a 50-year period.

All of these advantages make content analysis useful and attractive. On the other hand, the method is limited to the investigation of text and recorded human communications (Babbie, 2004; Bryman, 2004).

III. Forms of Content Analysis

As already mentioned, content analysis generally refers to quantitative assessment of various aspects of text. This research paper places its emphasis on such analysis. As outlined previously, content analysis is defined by the quest to analyze text in a manner that is systematic, valid, and replicable. The first and last criteria are most easily satisfied through quantification. Although qualitative content analysis may be equally valid in its assessments of text, it is much less likely to be systematic, and it is exceedingly difficult to replicate. Although quantification can have important limitations, it has the distinct advantage of transparency: An explicitly formulated research design for a systematic, quantitative study not only can be replicated but also allows any reader to evaluate how the investigator arrived at her or his conclusions.

Qualitative analysis does not inherently require such explicit research design and quite often depends on the expertise of the investigator (see, e.g., Neumann, 2008). The advantage of such analysis is that an investigator with substantive expertise may be able to identify nuances that a quantitative analysis misses. On the other hand, qualitative analysis, because it often lacks the sort of explicit coding scheme that is required for quantitative analysis, provides much less of a hedge against investigator bias. This led Babbie (2004) to suggest that an investigator engaged in qualitative content analysis must carefully search for disconfirming evidence—and report meticulously on any elements in the text that are inconsistent with the expected findings—to guard against a tendency to focus on the elements of the textual material that confirm the investigator’s expectations. If this is not done, there is the distinct risk that qualitative content analysis leads an investigator to confirm her or his expectations when these are not in fact supported by the evidence.

Quantitative content analysis, on the other hand, is more explicitly systematic and less dependent on the ability of an investigator to counteract the tendency to focus on confirming evidence. However, it may be less sensitive to contextual and cultural cues. In addition, the validity of quantitative content analysis depends on the adequacy of the coding scheme employed to analyze the text. Two issues deserve attention in this regard.

First, it is far easier to code manifest than latent content. Manifest content refers to the surface meaning of text, whereas latent content refers to the deeper or symbolic meaning. Content analysis has generally favored a focus on manifest content. Indeed, it has often been defined in terms of explicit and systematic coding rules (Holsti, 1969). In contrast, the analysis of the deeper, underlying meaning of text has more often been the province of those who favor qualitative content analysis or of discourse analysis (Neumann, 2008). As discussed previously, the latter type of analysis is less transparent, and replication is difficult or impossible. The focus on explicit and systematic coding rules does not mean that content analysis avoids interpretation but rather that it separates the data-gathering operation (the coding, counting, or both) from the interpretation of the results.

Second, the coding scheme needs to be carefully designed not only to ensure that it is explicitly stated and replicable, but also to ensure that it is grounded in the research question. In other words, the categories that are employed should be theoretically justified so that the resulting data help the investigator draw valid inferences from the text. What constitutes an appropriate coding scheme will depend on the research question the investigator seeks to test.

The distinction between manifest and latent content is perhaps somewhat artificial. Frequently, coding schemes that are based on relatively straightforward elements, such as word counts, do in fact seek to evaluate some aspect of latent content. In such cases, the specific words that are counted may have been chosen to reveal latent content.

For instance, in an analysis of the rhetoric of two ethnic nationalist Flemish parties in Belgium, Breuning and Ishiyama (1998) counted the references in the party platforms to the terms foreigners and immigrants. They theorized that the latter term has the connotation that the speaker or author perceives the persons who are being described as individuals who will integrate into the society and that the former describes individuals who are perceived as (permanent) outsiders. They further theorized that the extent to which a party preferred to use the term foreigner rather than immigrant when discussing nonnative populations was indicative of the party’s xenophobia. In other words, Breuning and Ishiyama used word counts of specific terms to measure the differences in xenophobic attitude between these two ethnic nationalist parties. The coding scheme was transparent and easy to replicate but was designed to evaluate latent content.

The advantage of using carefully chosen but explicit coding schemes to reveal some aspect of latent content is that they are transparent and permit the researcher to show exactly how she or he arrived at the study’s interpretations and conclusions.
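A word-count coding scheme of this kind can be sketched in a few lines of code. The term lists below are hypothetical stand-ins (the actual vocabularies used by Breuning and Ishiyama are not reproduced here); the point is only to show how a transparent, replicable count can be turned into an indicator of latent content.

```python
import re
from collections import Counter

# Hypothetical term lists; a real study would justify each entry theoretically
# and include all relevant synonyms and variant forms.
FOREIGNER_TERMS = {"foreigner", "foreigners"}
IMMIGRANT_TERMS = {"immigrant", "immigrants"}

def count_terms(text):
    """Count occurrences of each term list in a platform text."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    foreigner = sum(words[t] for t in FOREIGNER_TERMS)
    immigrant = sum(words[t] for t in IMMIGRANT_TERMS)
    return foreigner, immigrant

def foreigner_share(text):
    """Share of 'foreigner' usage among all references to nonnative
    populations, taken here as a rough indicator of the latent attitude."""
    f, i = count_terms(text)
    return f / (f + i) if (f + i) else None
```

Because every counting rule is stated explicitly, any reader can rerun the same counts on the same platforms and verify the resulting indicator.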

In addition to simple word counts, content analysis coding schemes may also record valences—that is, the value associated with the word that is coded. It does, after all, make a difference whether a word is used in a positive or negative construction. The automated coding scheme developed to study decision-makers’ operational codes includes valence measures (Walker et al., 1998).

Another variation is the use of thematic coding. In this case, the investigator is looking for evidence of specific themes in units of text. Theme coding can be tricky, since it involves judgment rather than simple counts. However, there are situations where thematic coding is theoretically justified—and word counts may simply not provide the investigator with an appropriate measure. In the creation of a coding scheme for thematic coding, extra care must be taken to make very clear and explicit the criteria for judging a unit of text to belong in one versus another category. Such a coding scheme may require pilot testing with several different coders to ensure that these coders share a common understanding of the coding instructions. The mechanics of designing a content analysis scheme are explored further in the next section of this research paper.

IV. Issues in Designing Content Analysis Studies

It can be difficult to devise a good content analysis study, although this also depends on the objective of the study. If the objective is to produce a study that is descriptive of some aspect of text, the construction of a coding scheme may not need to be overly complicated. In such a case, manifest content is coded to make systematic observations about that manifest content. At times, such studies have been accused of being atheoretical (Bryman, 2004). Whether this is a problem depends on the contribution a specific study seeks to make. For instance, Breuning et al. (2005) sought to analyze the content of academic journals and did not seek to test any theory. They simply sought to demonstrate in a quantifiable way who and what got published in the journals that they investigated. Such data provide insight into, for instance, the sort of work that is published in specific journals.

If, on the other hand, the study seeks to make inferences about a speaker or author or about the text’s impact, it can be much more difficult to determine what, exactly, needs to be coded. In that case, manifest content is coded in order to make inferences about underlying meaning or latent content. Content analysis studies that are undertaken for such objectives need to be thoroughly grounded in relevant theories, which can help to justify the validity of the measures. For instance, Hermann (1980, 2002) and Winter (2005) devised their content analysis schemes on the basis of psychological theories. Both sought to make inferences about the personality traits of decision makers on the basis of their interview responses and speeches.

How should an investigator go about designing a content analysis study, irrespective of whether she or he seeks to focus on manifest or latent content? There is not one single, correct way to design a content analysis study. The investigator needs to carefully consider a variety of issues in order to design a content analysis that helps her or him find answers to a specific research question. There are several sources that can provide guidance. Hermann (2002) presented a set of eight questions that are designed to help a researcher determine whether content analysis is an appropriate research strategy. Holsti (1969), Krippendorff (1980), and Weber (1990) each discussed many of the trade-offs and mechanical details of designing content analysis studies. All of these sources are useful for investigators who would like more detailed guidance in designing a content analysis study. What follows here is a condensed guide to the sort of issues that investigators need to consider in designing such studies. It is intended to enable the reader to evaluate whether this methodology might be appropriate in her or his own research projects.

The first step is to devise a research question and ask whether content analysis would be an appropriate research strategy. If the question can be answered using text or other communications, then a content analysis may well be a useful strategy. It is important to check at this early stage whether a sufficient volume of appropriate text is available and accessible and in what form it is accessible. If one plans to conduct a computer-assisted content analysis, it will be extremely useful if the materials are available in electronic format. Contemporary text is far more likely to be accessible in such a format than are older documents. This does not make it impossible to conduct a computer-assisted content analysis of older documents, but doing so would require converting such documents to a machine-readable electronic format. That is an extra step that might be time-consuming to complete.

The second step involves decisions about the type of analysis to conduct. Content analysis is about generating data to answer a specific research question. Given one’s research question, is a quantitative or a qualitative approach appropriate? Is it possible to count word frequencies? Or would a thematic coding scheme be more appropriate? It may initially seem easier to devise a thematic coding scheme, but remember that the data generated in this way are usually less easily replicated than word counts.

It is likely to be more difficult to construct vocabularies for a content analysis study that counts relevant words, because the specific words must be carefully chosen for what they can reveal about underlying meaning. Despite the greater effort that needs to go into designing such a coding scheme, it may be well worth the effort to do so. Word counts that are theoretically informed (and designed to capture the latent meaning of text) derive their validity from that theoretical basis. Furthermore, the explicit nature of word count coding schemes makes them easier to replicate as well as easier to complete through computer-assisted coding strategies.

A distinct advantage of using computer-assisted coding strategies is that reliability becomes a nonissue: The computer finds every instance of the words it is programmed to count, so there is no error. Human coders get fatigued, and their attention wanders, leading them to make mistakes.

Although theme coding can be an appropriate strategy, it is far more difficult to have a computer assist in the task. Theme coding therefore tends to involve human coders, who may perform the coding task in inconsistent ways. Just as in producing word counts, human coders who are looking for thematic content may get fatigued and make errors as a result. In addition, thematic coding involves a judgment of the presence or absence of a specific theme in a unit of text. Depending on how explicit the coding instructions are and how well trained the human coders are, such judgments may show variation across different human coders.

In both word count and thematic coding, therefore, it is useful to evaluate the quality of the coding operation by taking two measures: One measures the consistency of each human coder, and the other measures the congruity of the decisions made by different human coders. The next section delves into these measures in more detail.

The third step involves creating the coding manual and coding schedule (Bryman, 2004). The former is a detailed and explicit set of instructions to coders. It explains what the unit of analysis is and includes all the possible categories for each dimension that will be coded. For word counts, it will list all the words that will be counted as being indicative of a specific dimension, including synonyms and variations of each word. For thematic coding, the coding manual should not only describe the categories, but also provide an example to help coders understand how to evaluate the text they will be coding.

It is extremely important that the categories for each dimension that will be coded be mutually exclusive and collectively exhaustive. This means that nothing should fit into more than one category simultaneously, and everything should fit into one of the categories. There should never be words, phrases, or themes that are part of a dimension that is being coded that cannot find a home in one of the categories. A clear and explicit coding manual is an important key to a successful content analysis study, especially if multiple coders work on the project; it is also important for persuading others of the validity of the study.
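For a word-count scheme, both requirements can be checked mechanically before coding begins. The sketch below uses hypothetical category names and vocabularies: mutual exclusivity means no term appears in two categories, and exhaustiveness means every term the study deems relevant is assigned somewhere.

```python
# Illustrative coding manual for one dimension; the categories and
# vocabularies are hypothetical examples, not drawn from any actual study.
MANUAL = {
    "economic": {"tax", "taxes", "budget", "deficit"},
    "security": {"defense", "military", "war"},
    "welfare": {"pension", "pensions", "healthcare"},
}

def check_manual(manual, relevant_terms):
    """Return (is_exclusive, uncovered_terms) for a coding manual.

    is_exclusive is False if any term appears in more than one category;
    uncovered_terms lists relevant terms that no category accounts for.
    """
    seen = {}
    exclusive = True
    for category, terms in manual.items():
        for term in terms:
            if term in seen:
                exclusive = False  # term would fit two categories at once
            seen[term] = category
    uncovered = sorted(set(relevant_terms) - set(seen))
    return exclusive, uncovered
```

Running such a check during pretesting surfaces overlapping or missing categories before coders (human or machine) encounter them in the text.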

The coding schedule is simply the form that the coder uses to record her or his coding decisions. Coding schedules may be paper, but it is also possible to use electronic spreadsheets or statistical analysis software instead. In the latter instance, the coder bypasses the need for data entry after completing the coding task.

A well-designed manual not only makes the coding task easier and the results more convincing but also ensures that the study is replicable. It is worthwhile to invest time and effort in the creation of a coding manual, pretesting it on a sample of text and revising it—multiple times if needed—to improve the coding categories or the clarity of the instructions, or both. In the process, it is useful to get feedback from others on the coding manual. If human coders will be completing much of the coding task, it also helps to have them each do a small pretest and check not only whether each coder implements the coding scheme as intended but also whether the different coders make substantially similar decisions.

As you ponder the various aspects of the design of a study, keep in mind whether the proposed analysis does indeed capture the goal of the study. Refer back to the research question and ask whether the coding manual will yield data that can reasonably be expected to shed light on that question.

A further issue to consider in designing the study is whether the text that will be analyzed is representational or instrumental (Hermann, 2002). The former type of text can be assumed to faithfully represent (aspects of) the personality, thoughts, or both, of the creator of that text. The latter type of text is generated for an instrumental reason, such as to persuade an audience, and may not reveal much about the speaker or author of that text. Hermann (1980, 2002) favored spontaneous remarks of decision makers for her content analyses, because her objective was to ascertain personality traits, and she expected spontaneous remarks to be far more representational than prepared speeches. Eshbaugh-Soha (2006), on the other hand, was interested in the instrumental use of language by presidents. For him, the prepared speeches of these decision makers were the more appropriate text. In other words, the text chosen for analysis should match the purposes of that analysis, just as the coding manual should be geared to the analytic purposes of the study.

Last, Babbie (2004) made a useful distinction between the unit of analysis and the unit of observation. If, for instance, you are interested in ascertaining the personality traits of a decision maker, then that decision maker is the unit of analysis—the entity about which you want to be able to make statements as a result of your investigation. To arrive at your conclusions about the individual decision maker, however, you may need to analyze multiple (and often a relatively large number of) spontaneous interview responses. Each of these interview responses is a unit of observation. Although a single interview response can be revealing, it is impossible to know whether it represents a typical statement of the subject under investigation unless it is compared with additional responses. Coding multiple units of observation permits the investigator to discern patterns and to make generalizable statements about the unit of analysis.

V. Human Coders, Reliability, and Stability

The previous section suggests that the use of human coders requires an assessment of the consistency of the coding decisions made by each human coder, as well as an evaluation of the congruity of the decisions made by different human coders. The former is generally referred to as the stability of the coding decisions, and the latter measures the reliability of the coding scheme.

Reliability requires that the coding scheme should lead different human coders to code the same text in the same way. That is, if the coding categories are sufficiently clear and explicit, trained human coders should exhibit little variation in the way they evaluate text. Whether human coders do indeed exhibit such agreement can be evaluated empirically. In doing so, it is important to compare the agreement on specific coding decisions rather than in the aggregate. For instance, when comparing the overall results of two coders, the data may look similar, but this similarity may evaporate when considering the individual coding decisions. If each coder made 100 coding decisions on one dimension and each assigned 50 of these to Category A, 30 to Category B, and the remaining 20 to Category C, their results look identical. However, when comparing Decision 1, Decision 2, Decision 3, and so on, it may become apparent that this overall congruity hides substantially different judgments regarding the discrete coding decisions. It is therefore recommended to analyze the similarity between individual coding decisions.

In addition, the calculation of an intercoder reliability score must take into account the role of chance. The coders must do better than agreement that results from chance in order for the similarity in their coding decisions to be meaningfully attributed to the clarity of the coding instructions. Both Krippendorff (1980) and Holsti (1969) provided detailed guidance for calculating intercoder reliability.
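The logic of chance correction can be illustrated with Cohen's kappa, one common statistic for two coders (Krippendorff's alpha, by contrast, generalizes to more coders and missing data). Kappa compares the observed decision-by-decision agreement with the agreement expected from each coder's marginal category frequencies alone.

```python
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' decision sequences.

    Each argument is a list of category labels, one per coding decision,
    compared decision by decision rather than in the aggregate.
    """
    assert len(coder_a) == len(coder_b), "coders must rate the same units"
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    # Observed agreement: proportion of identical individual decisions
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal distribution
    expected = sum(
        (coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories
    )
    # Note: undefined (division by zero) if expected agreement is 1,
    # i.e., both coders used only a single category.
    return (observed - expected) / (1 - expected)
```

The example also makes concrete the point above about aggregate versus individual agreement: two coders can have identical marginal distributions (and hence identical-looking totals) while kappa reveals substantial disagreement on the discrete decisions.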

Well-designed and carefully implemented content analysis studies usually report an intercoder reliability statistic to demonstrate that the coding instrument is indeed sufficiently explicit to permit a high level of agreement between the coding decisions made by different human coders.

In addition, content analysis studies are often concerned with stability, which is defined as the consistency of the coder’s decisions across the texts she or he evaluates. Inconsistent decisions can result from coder fatigue but also from slight shifts in the implementation of a coding scheme between the first and last items coded. To evaluate the consistency of the coding decisions across texts and time, coders are sometimes asked to recode some of the text coded early in the project. The coding decisions made by a single coder at these two different times are then compared. If the individual coder makes largely the same decisions, the coding is judged to be stable. The calculation of the stability of the coding can be done using the same statistical tools used to determine the intercoder reliability score. The only difference is that stability measures the consistency of a single human coder, whereas the reliability score measures the consistency of the coding decisions across different human coders.
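A minimal sketch of such a test–retest check follows, using raw proportion agreement for simplicity; as noted above, the same chance-corrected statistics used for intercoder reliability can be applied, with a single coder's two passes taking the place of two coders.

```python
def stability(first_pass, second_pass):
    """Proportion of identical decisions when one coder recodes the same
    units of text at a later point in the project.

    Each argument is a list of category labels, one per coded unit, with
    the units in the same order in both passes.
    """
    assert len(first_pass) == len(second_pass), "passes must cover the same units"
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)
```

A value near 1.0 suggests the coder applied the scheme consistently over time; lower values may signal fatigue or drift in how the instructions were interpreted.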

VI. Computer-Assisted Content Analysis

Content analysis predates the invention of computers. However, the increasing availability and current ubiquity of computers have revolutionized this methodology. Studies that would have required countless hours of meticulous attention to detail by several human coders can now be completed with much greater reliability and at much higher speed by computers.

Numerous software packages for content analysis have been developed. Some of these have been designed for very specific purposes (such as Diction), whereas other programs try to accommodate a variety of content analysis purposes (e.g., Atlas.ti, ProfilerPlus, and Wordstat). New programs enter the market on a regular basis, and older ones disappear. For this reason, this research paper does not include an overview of available programs.

When investigating software for content analysis, carefully evaluate whether the program suits the purposes for which it will be used. Each program has been developed with a particular purpose in mind and will excel at certain things and be less useful for other purposes. In other words, it is impossible to judge the merits of content analysis software in the abstract and without reference to the purposes for which it is being considered.

The speed with which computers can analyze large volumes of text is not the only advantage. Reliability essentially becomes a nonissue, because a computer program will analyze the same text in the same way no matter how often one asks it to analyze that text. This led West and Fuller (2001) to state that the “value of computer-assisted content analysis, particularly in terms of reliability, is difficult to overstate” (p. 91).

At the same time, computer-assisted content analysis depends for its validity on the careful design of the coding scheme, which remains the work of the investigator. A reliably executed computer-assisted content analysis cannot be better than the coding scheme it implements. Human coders are likely to point out flaws in the logic of a study’s design, whereas a computer will complete the coding task without question, even if it makes no substantive sense. Hence, it remains important to obtain feedback on the draft of the coding scheme and to run pilot tests on small amounts of text to ensure that the scheme is well designed and appropriate for its purpose. Computers excel at the mechanical task of coding and can complete it far more reliably than human coders. However, theoretically grounding the content analysis, evaluating whether the coding scheme tests what it claims to test, and deciding whether the coding categories meet the appropriate standards of clarity, explicitness, and validity remain tasks that only the investigator can competently execute.
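The perfect reliability of machine coding can be illustrated with a minimal dictionary-based word count, the basic approach behind many content analysis packages. The coding scheme and the sample sentence below are invented for illustration; a real scheme would be far larger and would need to be validated against the study’s theoretical constructs.

```python
import re
from collections import Counter

# Hypothetical coding scheme: each category maps to a set of indicator words.
SCHEME = {
    "conflict": {"attack", "threat", "enemy", "war"},
    "cooperation": {"ally", "agreement", "partner", "peace"},
}

def code_text(text, scheme):
    """Count how many tokens in `text` fall into each category.

    Given the same text and scheme, the result is identical on every
    run -- the sense in which reliability 'becomes a nonissue'.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for category, words in scheme.items():
        counts[category] = sum(token in words for token in tokens)
    return dict(counts)

speech = "We seek no war with any enemy; we seek agreement and peace."
print(code_text(speech, SCHEME))  # {'conflict': 2, 'cooperation': 2}
```

Note that the program will faithfully count whatever the dictionary tells it to count: if the word lists are poorly chosen, the output is perfectly reliable and entirely invalid, which is precisely why the design work described above cannot be delegated to the machine.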

VII. Future Directions

This research paper endeavors to provide a primer for the sort of research tasks for which content analysis is suited, as well as to suggest basic problems in the design of content analysis studies. The biggest drawback of content analysis has traditionally been that it was very time-consuming to complete large-scale studies. The advent of computer-assisted content analysis has given new life to this research methodology, and numerous software packages have been developed for this technique. Some of the initial software was suitable for only very specific coding tasks that mirrored the developer’s research purposes. More recently, there has been a greater emphasis on the development of software that can be adapted to a multitude of content analysis research designs. Atlas.ti, ProfilerPlus, and Wordstat are part of this new generation of software for content analysis.

One longstanding critique of content analysis is that it tends to be atheoretical. That was true of some studies, but it does not adequately capture the more vexing problems that have plagued content analysis in political science. In some instances, content analysis is an end in itself, as in studies whose sole aim is to describe communicative content systematically. However, studies that endeavored to analyze the personality characteristics of leaders, or the impact of a message, often also stopped at analyzing text, even though they in fact sought to make statements about something beyond the text. Studies focusing on the traits of leaders, for instance, have long claimed that leader personality influences the types of decisions such individuals make. That claim cannot be demonstrated convincingly on the basis of the results of content analysis alone.

Recent content analysis studies have begun to use computer-assisted content analysis in combination with other variables. Because less time and effort are spent on the content analysis itself, researchers are more willing to combine the data derived from it with additional variables. Content analysis has thus become a tool for generating data rather than an end in itself.

As computer-assisted content analysis has increasingly become the norm, this methodology has gained an expanded purpose in political science. Content analysis can now be used as a data-making tool that can yield quantitative indicators of aspects of political life that previously were deemed difficult or impossible to measure. For example, rather than claiming that certain personality traits create a disposition toward certain policy responses, investigators can now evaluate systematically whether this is the case. The results of content analysis can now be employed as variables in models that can more directly test the relationship between the traits of leaders and their actions, as well as establish more explicitly the relationship between speeches and their impact on political decisions.
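The logic of using content-analysis output as a variable can be sketched with a simple correlation. All of the data below are hypothetical: the distrust scores stand in for trait measures derived from coding each leader’s speeches, and the action counts stand in for an independently measured behavioral outcome.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data for six leaders: a distrust score derived from
# content analysis of speeches, and a count of conflictual actions.
distrust_scores  = [0.2, 0.5, 0.3, 0.8, 0.6, 0.9]
conflict_actions = [1, 3, 2, 6, 4, 7]

print(round(pearson_r(distrust_scores, conflict_actions), 2))
```

In actual research the content-derived scores would enter a properly specified statistical model with controls, but the structure is the same: the text yields a measured variable, and the relationship to behavior is then tested rather than asserted.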

Although there will remain cases where the analysis of the content of text is an end in itself, the more interesting research frontier for political science lies in using this methodology to systematically generate data. Combining these data with additional measures will permit analysts to evaluate the propositions that drew them to content analysis in the first place.

Bibliography

  1. Babbie, E. (2004). The practice of social research (10th ed.). Belmont, CA: Wadsworth.
  2. Berelson, B. (1952). Content analysis in communication research. Glencoe, IL: Free Press.
  3. Breuning, M., Bredehoft, J., & Walton, E. (2005). Promise and performance: An evaluation of journals in international relations. International Studies Perspectives, 6(4), 447-461.
  4. Breuning, M., & Ishiyama, J. (1998). The rhetoric of nationalism: Rhetorical strategies of the Volksunie and Vlaams Blok in Belgium, 1991-1995. Political Communication, 15(1), 5-26.
  5. Bryman, A. (2004). Social science research methods (2nd ed.). Oxford, UK: Oxford University Press.
  6. Eshbaugh-Soha, M. (2006). The president’s speeches: Beyond “going public.” Boulder, CO: Lynne Rienner.
  7. Hermann, M. G. (1980). Explaining foreign policy behavior using the personal characteristics of political leaders. International Studies Quarterly, 24, 7-46.
  8. Hermann, M. G. (2002). Assessing leadership style: A trait analysis. Hilliard, OH: Social Science Automation.
  9. Holsti, O. (1969). Content analysis for the social sciences and humanities. Reading, MA: Addison-Wesley.
  10. Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage.
  11. Neumann, I. B. (2008). Discourse analysis. In A. Klotz & D. Prakash (Eds.), Qualitative methods in international relations: A practical guide (pp. 61-77). Basingstoke, UK: Palgrave Macmillan.
  12. Walker, S. G., Schafer, M., & Young, M. D. (1998). Systematic procedures for operational code analysis: Measuring and modeling Jimmy Carter’s operational code. International Studies Quarterly, 42(1), 175-190.
  13. Weber, R. P. (1990). Basic content analysis (2nd ed.). Newbury Park, CA: Sage.
  14. West, M. D., & Fuller, L. K. (2001). Toward a typology and theoretical grounding for computer content analysis. In M. D. West (Ed.), Theory, method, and practice in computer content analysis (pp. 77-96). Westport, CT: Ablex.
  15. Winter, D. (2005). Measuring the motives of political actors at a distance. In J. Post (Ed.), The psychological assessment of political leaders (pp. 153-177). Ann Arbor: University of Michigan Press.
