Survey research can be a powerful tool for understanding how and why political attitudes are distributed in specific places, times, and communities. Data on such attitudes can be gathered and analyzed using a wide variety of methods and methodological postures, such as direct or indirect questioning, the use of primary- or secondary-source materials, and the gathering of actual or imputed information.
Such wide variation in the observational and inferential aspects of survey research on political attitudes also renders it particularly complicated. Research that focuses on psychological processes and outcomes almost always involves the use of incomplete data from imperfect sources and only partial information about the data generating process (the personal and procedural factors that cause the data to be what they are). In the US Government (USG), analysts are often tasked with both attitude observation (the gathering of information about attitudes) and measurement (the assignment of value to attitude data already gathered). The collection and analysis of attitude data is also vulnerable to systematic distortions due to the questioner’s own unexamined attitudes about the respondents, the question at hand, or the operational context. All of these factors introduce unique methodological challenges, which themselves include ethical dilemmas and responsibilities.
This paper provides analysts with a road map for examining and addressing the major methodological issues involved in ascertaining “true attitudes” from survey data, particularly in fragile, volatile, or conflict-prone contexts. True attitudes are predispositions that are honestly held and reflective of the respondent’s sense of the real state of the world; true responses to attitude inquiries (i.e., data) are honestly communicated descriptions of those predispositions.1 The road map is designed to help analysts recognize and troubleshoot the ethical and technical challenges described throughout this paper.
Technical aspects of quantitative data and analysis often garner the lion’s share of attention when it comes to survey methodology; the other papers in this Framework include detailed coverage of the total survey error paradigm,2 as well as discussion of the range of statistical tools available for survey data analysis. This paper, however, foregrounds the ethical dimensions of gathering and analyzing attitude data as critical concerns for all aspects of both quantitative and qualitative survey research. Embedded power dynamics, issues of informed consent, confidentiality, respondent security, and other ethics-based concerns all impact the inputs and outputs of survey research. The discussion here builds on proliferating guidance and analysis about the ethical dimensions of this work by scholars and practitioners who work in and with conflict-affected spaces and communities.
Though increasingly accessible, reviews of the intertwined technical and ethical dimensions of survey research in conflict-adjacent settings are still few in relation to the dramatic scope of the underlying concerns. This difference in volume is especially pronounced when compared to the substantial body of literature about mainstream survey methodology in the fields of American politics, psychology, public policy, and economics. The need for a larger body of extant literature that provides guidance, examples, and suggested best practices is perhaps even greater
___________________
1 The phrase “true attitude” may not resonate with scholars or analysts who question whether “truth” can ever be truly ascertained in the context of subjective phenomena.
2 For more information, see “Drawing Inferences from Public Opinion Surveys: Insights for Intelligence Reports,” earlier in this Analytic Framework.
when the key variables that attract interest are attitudes rather than behavior or experience. Anyone attempting to assess political attitudes in the wake of significant safety and security challenges faces an enormous task. The discussion here is intended as an entry point into a fuller discussion of those realities.
The paper proceeds as follows. Part I explores the key defining characteristics of political attitudes and describes some common sources of attitude data. Part II reviews the range of sites and sources of attitude data collection and measures that intelligence analysts are likely to encounter in their work. Part III discusses the complicated nature of “true attitudes” as an empirical concept. Part IV reviews the intertwined nature of ethical and technical considerations in the gathering and analysis of attitude data, particularly in the USG. Part V takes an in-depth look at eight specific ethical and technical challenges that make it difficult to ascertain true attitudes, along with a brief review of mitigation techniques used elsewhere to address these and similar issues. Finally, Part VI concludes the paper with a unified set of general suggestions for responding to the challenges outlined earlier, along with a special note about the promise of metadata as both a data attribute and a mitigation tool.
Political attitudes are cognitive frameworks that individuals use to locate themselves and others in deliberations over matters of community affairs. As “relatively enduring orientations…that provide individuals with mental frameworks for making economical sense of the world” (Hennessy, 1970: 463), attitudes indicate the predispositions and underlying environmental analyses that individuals might use to form the value judgments at the heart of public opinions (Banks, 2014). This is in contrast to values, which refer to the more general, non-evaluative, non-object-specific moral orientations that people hold (Halman, 2007; Van Dyke, 1962). Both attitudes and values can constitute the “why” that underpins an individual’s political opinions, positions, or behaviors.
These are not just academic distinctions. Rather, they speak to the practical multidimensionality of data regarding “what people think.” While many popular sources tend to use the terms “attitude,” “opinion,” and “position” interchangeably, these concepts are distinct, if complementary, and they vary substantively in how readily they reveal an individual’s core moral judgments. Consider, for example, investigations into why individuals might support a citizen insurrection or
violent social movement. In the United States, a 2018 Reuters/Ipsos poll (Eisler, Parker, and Harte, 2018) asked what “Make America Great Again” (MAGA) meant to Trump voters as a means of exploring voter attitudes. The responses indicated which policy positions individuals associate with the phrase (e.g., stronger borders, conservative judges) rather than underlying thought patterns. In contrast, ongoing research by Christopher Parker and Rachel Blum (2021) on MAGA supporter attitudes about politics, the COVID-19 pandemic, and the January 2021 attack on the Capitol measures respondent levels of agreement with statements such as “[R]ioters had good intentions” or levels of concern about “the American way of life disappearing.” Both data collections provide some insight into what MAGA movement supporters think, but the Parker and Blum data provide a more incisive look into respondents’ political motivations, while the Reuters/Ipsos data suggest their range of preferred political outcomes.
Political attitudes are deeply contextual and contingent, even as they are relatively enduring. Though past and recent psychological research has shown that attitudes can be quite durable over time (Verplanken and Orbell, 2022), we also know that attitudes can and do change as individuals take in new information, have new experiences, and respond to shifts in their cultural environments (Schwarz, 2007). Just as ideology and public opinion are not simply additive functions of individual attitudes, the economic, social, and institutional forces that bring individual attitudes into dialogue with each other are not only context but also critical components of one’s worldview. The same is true for research into political attitudes. All survey research both absorbs and reflects the cultural and political contexts that facilitate it and as such can significantly influence—for better and for worse—the power dynamics embedded in those contexts (Bond, 2018).
The collection and analysis of data on political attitudes is complicated by their limited observability. Whether in an experimental or observational setting, there can be no fully objective, arms-length measurement of how individuals perceive and navigate their understandings of their world (Bond, 2018). Any research into an individual’s mental, emotional, and psychosocial attributes must be undertaken with a significant degree of scientific and professional humility. Individual respondents are the ultimate and rightful gatekeepers of data that is based on their lives and experiences. Access to those lives and experiences must be granted in some fashion for “data” to exist.
Quantitative and qualitative attitudinal survey data are most commonly gathered through structured questionnaires, though they also can come from interview responses or focus group discussions. Any of these methods can be executed remotely or face to face, with or without computer assistance, and inclusive or exclusive of some type of scientific experimentation. Some researchers rely on big-data and artificial intelligence (AI) approaches to collecting and analyzing other sorts of data (e.g., social media, crowd activity) in order to infer attitudes from behavior (Amaya et al., 2020). AI and big-data methods specifically may introduce unnecessary measurement error, however, through confirmation bias or ecological fallacy (Olteanu et al., 2019). The depth of background information that an analyst has about even a single data point can vary widely, as uncontextualized poll, survey, and interview data from both public and private sources abound.
Much attitude data is collected through large- and small-scale academic research and driven by prespecified research questions or protocols. The Afrobarometer, AmericasBarometer, and Eurobarometer projects are well-known examples of large-scale, multinational quantitative survey initiatives that gather and analyze both country-specific and regional trends in public attitudes; the single-country American National Election Studies project and the global World Values Survey are similarly popular. In these cases and others like them, both the observation and measurement processes are usually relatively transparent and available for public scrutiny. Many of the largest initiatives tend to collect data in regular waves over time and attempt to standardize both question batteries and survey enumeration. Though most academic survey research projects are vetted by an institutional review board, the responsibility for identifying and addressing issues of consent, privacy, confidentiality, or harm—all power considerations inherent in research with humans—lies with the researcher (Blee and Currier, 2011).
Not all data on public attitudes come from scientific research processes, though. Foundations, policy professionals, think tanks, and representatives of both governmental and transgovernmental organizations all collect attitude data that may be of interest to the IC, just like academic researchers do. The Arab Public Opinion Index (produced by the Arab Center in
Washington, DC), the South African Social Attitudes Survey (conducted by the Pretoria-based Human Sciences Research Council), the Pew Global Attitudes surveys (produced by the Pew Research Center), and commercial datasets produced by private firms are all examples of nonacademic sources of attitude data. These data may be based on open or closed sources, with varying degrees of restriction on public access to the underlying information, collection methods, and measurement schemes. Raw data may be less available from these sources than from academic sources, so the results of such research are often all that the end-user may be able to access.
Finally, data on public attitudes from journalistic, advocacy-based, or diplomatic sources (e.g., embassies) can be helpful for getting a more specific sense of what people think during or about emerging or acute moments of crisis and change. While a key trade-off is transparency—detailed information about the methodologies used is often less readily available than from other data gatherers—these data are often gathered by individuals with a close relationship to the surveyed population, and their reports of public attitudes are often accompanied by valuable contextual information that can deepen an analyst’s understanding of the situation at hand.
True attitudes and true attitude data reflect the actual framework a respondent uses to make sense of the world. This does not mean that all attitudinal survey responses reflecting something other than that are “false,” though. In myriad ways, survey responses can indicate (un)common, partial, or strategic understandings of the world about which the respondent is being asked. Asking whether data accurately reflect responses to a given instrument is distinct from asking whether a respondent’s knowledge and communications are stable, reliable, complete, and yes, truthful.
Not all attitudes are easily acknowledged, even by someone doing the cognitive work of responding to a survey question. Implicit attitudes are the sorts of “visceral” associations that individuals hold and form before higher-order psychological processes kick in to shape their expression (Devos, 2008). Explicit attitudes, on the other hand, are the associations that individuals are more likely to self-report or cognitively access when asked about them. While implicit attitudes are sometimes called “unconscious biases,” some scholars suggest that while “people sometimes lack conscious
awareness of the origin of their attitudes…there is no evidence that people lack conscious awareness of indirectly assessed attitudes per se” (Gawronski et al., 2006; emphasis added). Among Americans, implicit attitudes have been found to be both tightly linked to emotion (Banks and Hicks, 2016) and particularly manipulable by political elites (Ryan, 2017). Implicit and explicit attitudes themselves are routinely correlated, but can both diverge (Petty et al., 2006) and be stimulated independently (Ryan and Krupnikov, 2021). Implicit attitudes tend to be indirectly accessed, perhaps owing to their “beneath the surface” nature, while explicit attitudes are much more likely to be observed through survey research and measured directly. Although not without its scientific (Azar, 2008) and cultural (Jost, 2018) critiques, the Harvard University–based Implicit Association Test (IAT) is among the most widely known measures of implicit attitudes.
The state of having no predisposition about a topic or question has been described as encompassing “nonattitudes” for decades (Converse, 1964). Respondent indifference and fabricated affect are key sources of the nonattitude; common responses reflecting this range from statements of “no opinion” to random responses chosen simply for the sake of responding (and whatever benefits the respondent expects from it). The empirical possibility of nonattitudes complicates the entire enterprise of using surveys to uncover objective “truths.” If survey questions behave “more like a stimulus to form an opinion or judgment rather than a means of identifying already existing opinions or judgments” (Westerhof, 1994: 13), then the survey itself is not uncovering truth but rather (co-)creating it.
As respondents tend to provide the best answer that they can, given the current context in which they find themselves (Bond et al., 2020), even a purposefully misleading survey response can be a reliable indicator of how the respondent may view the question, the survey, the surveyor, or the surrounding circumstances. Best practices in engaged political science research suggest a very close reading of the concept of falsity when conducting human subjects research (Davenport and Moore, 2015): When a respondent provides an answer that is knowingly and purposefully deceptive, we may cautiously consider that answer to be untrue. Cautious consideration is ideal because the empirical identification of deceptive or otherwise strategic behavior in survey (or interview) response is notoriously difficult. Even so, analysts ought to develop working hypotheses about why strategic behavior might be useful in a given circumstance.
Returning to the Parker and Blum research on MAGA attitudes, the authors provide the methodological note that even though they “recruited participants using Facebook ads that targeted users who listed ‘make America great again’ (MAGA) as an interest…we cannot rule out the possibility that some users might be following MAGA as a sort of casual opposition research.”3 Besides highlighting the fact of multiple plausible attitude-based explanations for the single behavior of following a given Facebook interest, this disclosure also hints at the possibility that some faux supporters might provide intentionally misleading or untrue answers to the survey. The authors note further that “in anticipation of this, we included several metrics on the survey to distinguish ‘real’ MAGA followers from opposition researchers. These included questions about party identification, group membership, and voting patterns.”4
All who review and plan to use any data compilations ought to scan them for systematic silences that themselves could explain patterns in how questions might be answered. Incomplete responses, partial responses, and nonresponses such as skipped questions and “don’t know” or “prefer not to say” answers can still be informative for analysts. In examining the correlates of “don’t know” answers and question refusals in over 200 surveys, for example, Shoemaker and her colleagues (2002) suggest that while both responses can be partly explained by the sensitivity of the topic and the cognitive effort required to answer a question, “don’t know” responses are more sensitive primarily to variation in cognitive effort. Analysts must take care not to conflate survey responses that indicate a lack of information or uncertainty about the respondent’s position (such as “don’t know”) with evidence of indifference (“no opinion”).
While simply acknowledging that partial responses and nonresponses contain valuable information can be a useful step,5 exploring that information is likely to require substantial follow-up and additional time. Consider,
___________________
3 This appears in the methodology statement related to the MAGA study posted by Parker and Blum on https://sites.uw.edu/magastudy/methodology/.
4 https://sites.uw.edu/magastudy/methodology/.
5 The question of whether “no opinion” responses ought to be classed as partial responses, nonresponses, or indicators of nonattitudes remains somewhat unresolved. In a number of household survey experiments with American respondents, Krosnick and his colleagues (2002: 371) found that “[t]he quality of attitude reports obtained (as measured by over-time consistency and responsiveness to a question manipulation) was not compromised by the omission of no-opinion options” and that the no-opinion option may in fact degrade data quality by providing respondents with a means for avoiding the cognitive work inherent in answering a given question.
for example, the results of a Gallup survey (Gallup, 2011) of Yemenis fielded during the early months of Yemen’s 2011–2012 pro-democracy uprising. Titled “Confidence in Government Divides Yemenis,” the report notes that the percentage of respondents with confidence in government had fallen since the 2009 and 2010 survey waves and that the percentage with no confidence had risen over the same period; meanwhile, the percentage of Yemenis who did not answer the question had also gone up steadily, from 3 percent in 2009 to 9 percent in 2010 and 15 percent in 2011.
There is no further investigation of that group, although it may be the one with the best potential for illustrating the possible dangers of openly stating an opinion, the volatility of opinion on the matter among the individuals surveyed, or the utility of survey participation during an episode of open conflict with the government in question.
A naïve interpretation of these data might suggest only an increase in the number of people with nonattitudes or a decline in the number of people with strong attitudes. An analyst taking context seriously would note the precariousness of face-to-face interviews about such a consequential and contentious issue with individuals as young as 15 years old (see the survey methods section of the report) and perhaps revise their confidence in all three apparent trends with this newly nuanced take on what questions a given Yemeni might have answered about that particular topic at that particular time.
Ethically questionable or otherwise harmful survey research is rarely, if ever, scientifically or societally defensible. All individuals attempting to gather or analyze attitude data should at minimum endeavor to do no harm. Respect for persons, beneficence, and justice can and should be as widely applicable in the intelligence context as they are in the social science world, such that the safety and well-being of any research participant anywhere is treated with the same reverence as the IC’s commitment to serving the interests of the American public.
The ethical dilemmas at the core of intelligence work are not resolvable by addressing only the technical aspects of a given instrument or measure. Whether an analyst should use data collected in an unethical manner, or data that cannot be transparently analyzed with methods that follow the Belmont Report’s principles for the treatment of human subjects, is itself a serious ethical question. Furthermore, it is clear that violations of ethical responsibility produce “bad” data, and as the saying goes, “garbage in, garbage out.” It is unwise to trust the veracity and completeness of attitude data from respondents who have been unduly and disproportionately harmed
in the collection of data or data from others who might resonate with or anticipate similar experiences. Respondents who are unduly and disproportionately harmed through conclusions drawn on the basis of data about their attitudes may have reason to lobby others against future participation, thus damaging the possibility of continued learning. Analysts who prioritize ends over (unethical) means in the gathering of data may be setting themselves up for faulty and inconsistent analytical conclusions.
While individual analysts will always need to rely heavily on their personal ethical codes when engaging with data on political attitudes, strong statements of institutional guidance would help to establish and standardize norms and practices in the IC writ large. The American Medical Association, for example, recently opined that only “in the rare instances when ethically tainted data have been validated by rigorous scientific analysis, are the only data of such nature available, and human lives would certainly be lost without the knowledge obtained from the data, it may be permissible to use or publish findings from unethical experiments” (American Medical Association, n.d.). Even then, it is the researcher’s responsibility to transparently describe the ethical shortcomings of the data and to provide compelling justification for its use. The American Association for Public Opinion Research (AAPOR) maintains a Code of Professional Ethics and Practices that is meant to clarify the ethical responsibilities of public opinion and survey research professionals working in a variety of institutional settings.6 The AAPOR also maintains a list of Condemned Survey Practices,7 which highlights what the professional survey research community deems to be unethical practices in the collection and reporting of such data. While all AAPOR members sign onto this code, the responsibility for adhering to and applying these norms remains squarely on the professional’s shoulders.
The Principles of Professional Ethics for the Intelligence Community8 and the Principles of Artificial Intelligence Ethics for the Intelligence Community,9 as laid out by the Office of the Director of National Intelligence, offer broad guidance as to seven principles—mission, truth, lawfulness, integrity, stewardship, excellence, and diversity—that ought to govern all work done by the IC. Collectively, though, these fall short of providing detailed instruction about how to engage with data that results from an unethical collection. This is particularly problematic for the analysis of attitude data. In light of its limited observability, pressures to gather as much readily analyzable data as possible as efficiently as possible can push even careful analysts to make decisions that
___________________
6 https://www.aapor.org/Standards-Ethics.aspx.
7 https://www.aapor.org/Standards-Ethics/Survey-Practices-that-AAPOR-Condemns.aspx.
8 https://www.dni.gov/files/documents/CLPO/Principles%20of%20Professional%20Ethics%20for%20the%20IC.pdf.
9 https://www.dni.gov/files/ODNI/documents/Principles_of_AI_Ethics_for_the_Intelligence_Community.pdf.
jeopardize the ethical soundness of their work (Bond et al., 2020). When the analyst is aware of a systematic ethical or technical irregularity in survey execution or data collection, and there is no opportunity to redeploy the instrument, the best practice is to incorporate models of the bias into any resulting analysis, but the source and nature of the irregularity can profoundly condition the set of potential response options. Even a basic mapping of the research process can help clarify the stakeholders, costs, and benefits of survey participation for all parties involved.
This section describes eight specific ethical and technical challenges that substantially test the internal and external validity of attitudinal survey research. Two are based in structural aspects of gathering and analysis, four are based in situational specifics, and the final two are related to instrument design. Each suggests empirical indicators that may be employed to assess whether the data (to be) collected may be systematically incomplete or irregular.10 All are accompanied by a few strategies for helping analysts lean into the relational aspects of survey research in order to credibly gather, analyze, and report information about individual attitudes in light of the challenges. Table 3C-1 presents a list of the challenges to be addressed in this section.
Knowledge of and responses to these challenges are central to assessing the quality and propriety of any attempt to collect or analyze
TABLE 3C-1 Structural, Situational, and Instrumental Challenges in True Attitude Observation and Measurement
5.1 Structural Challenges
5.2 Situational Challenges
5.3 Instrumental Challenges
___________________
10 Many of these, and other concerns about self-report data, have generated large literatures of their own across many different social science disciplines.
such information. While I discuss them largely as discrete issues, these challenges and opportunities are in constant dialogue with each other. Tweaking one aspect is in fact almost certain to alter the impact and importance of others (Davenport and Moore, 2015). These concerns are as relevant for desk research as for work done in the field (Hoover Green and Cohen, 2021). Cases where they cannot be avoided at all may be good examples of survey projects that ought not proceed or data that is ill-suited for analysis.
Unacknowledged deception on the researcher’s part is as problematic for any data collection or analysis as unrevealed deception by a respondent. Purposeful misrepresentations about the scope, justification, or intended uses of an individual’s survey participation, whether for reasons of scientific validity or political expediency, are coercive insofar as they rob potential participants of their right to make a fully informed choice about whether, how, and how much to participate in a given survey project. Informed consent hinges on the respondent’s understanding of their own ability to say no to any or all parts of the research at any time in the research process. Survey designers, enumerators, and data end users alike are responsible for ensuring that the costs and risks of participation have been fully communicated to and understood by respondents and that respondents have not been misled or coerced into their participation. Fully informed consent is also best safeguarded when treated as an ongoing relationship between question askers and answerers rather than as an unchanging state of affairs.
It is not trivial that the general practice of intelligence gathering and analysis is itself ethically complicated, owing primarily to its reliance on surveillance as a primary methodological tool. Data gathering by surveillance is by definition non-neutral and often opaque, intrusive, and done in secret. Privacy, for example, is one of the basic rights to which human research subjects are entitled, yet intelligence gathering may involve the collection of information intended to yield insights about individuals’ private thoughts and positions in a manner that is surreptitious and does not explicitly call out the actual intention of the data collection effort. In other words, information about individuals’ private thoughts and positions may be collected without their explicit willingness to be asked about such thoughts and positions. Driscoll and Schuster (2017) detail the perverse incentives that lead some researchers to design their work in such a way that specifically courts (rather than simply appeals to) the intelligence community. The authors also note that in some circles, methods of careful
and methodical ethnography can themselves be “confused with spycraft” (Driscoll and Schuster, 2017: 411).11
The reality of geopolitical interests that have at times overshadowed humanitarian responsibilities is relevant here. Transparency and full information are particularly salient ethical considerations in the analysis of attitude data because opinions and attitudes about the United States as a political project will inevitably influence the reception of its efforts.
Sensitivity to power dynamics often leads to greater transparency about why certain questions are asked of and answered by certain individuals. The esteem that respondents have for the survey commissioner, for example, can encourage participants to try to “help” by providing answers that they think will support their questioner’s presumed goals. Alternatively, it can also encourage respondents to attempt to “sabotage” the study by submitting purposefully untrue or misleading answers. Attempts to rebalance power differentials through offering payment or other compensation for research participation must be carefully paired with a transparent and full consent process to avoid the appearance or effect of soliciting certain answers or coercing individuals in economic need. In any case, respondents who find themselves in a disempowered position relative to their questioners may offer distorted information to maintain or increase their own perceived safety or security.
Efforts to neutralize power sensitivities by matching enumerator and respondent characteristics are commonplace (Lupu and Michelitch, 2018), but it is often unclear which aspects of respondent or enumerator identity are most likely to have a practical effect on responses or overall reception (Blaydes and Gillum, 2013; Davenport, 2013). Particularly in fragile and low-resource environments, a policy of using local rather than American enumerators for surveys done in foreign countries can open up the possibility of shunting the risks and costs associated with the work onto individuals with more precarious circumstances, leading to the exploitation and unnecessary endangerment of more vulnerable research partners (Mwambari, 2019). Even area experts asked to make judgment calls on data based on their professional sensibilities should be supported in examining their own explicit and implicit attitudes and the origin of those sensibilities as a necessary part of the research process.
___________________
11 For more on the relationship between technical and ethical considerations in ethnography, see Schatz (2009), as well as Procter and Spector (2019).
Context fragility or volatility is a function of the individual respondent’s perceived and actual ability to manipulate their environment in predictable and reliable ways in their own favor and in relation to others. Such conditions may also produce systematic distortions in how respondents react to being asked about their beliefs and preferences. While the possibility of partial or incomplete attitude data may well be a function of the stability of the respondent’s circumstances, extant research is not clear on the direct effects of unstable or rapidly shifting risk conditions on the stability or strength of individual attitudes.
Generally speaking, however, field data quality is known to be unpredictable in active conflict or contestation zones and in moments when the research participants are facing uncertain or rapidly shifting security conditions (Wood, 2006). It is equally plausible that uncertain environments cause some individuals to hold firmer to their beliefs but encourage others to question or revise their attitudes. It is also plausible that active conflict might play a role in the shifting of individual attitudes. Braithwaite and colleagues (2019: 476) noted, for example, that “During the 2006 Israeli-Hezbollah war, Israeli citizens playing ultimatum and trust games were significantly more willing to administer costly punishments or to reward other players, respectively, than during peacetime (Gneezy and Fessler, 2011). In other words, during wartime—ostensibly when violence is more salient—the expectation that players would act ‘fairly’ increased and when it did not, other players were more willing to incur the costs of punishing unfairness and rewarding fairness.”
Attempts to understand what people think that do not take into account the precariousness of basic needs satisfaction in resource-poor environments or the scope of threats to physical security in socially fragmented communities may do more harm than good for respondents and analysts. Many have acknowledged, for example, that active armed conflict in foreign countries can produce “permissive environments in which researchers can engage in conduct that would be considered deeply problematic at home” (Cronin-Furman and Lake, 2018: 607). The social instability of groups experiencing fragile circumstances, structural violence, or physical victimization in both foreign and domestic environments can similarly open the door for overzealous and harmful research methodologies.
Finally, Horn (2011) offers a wealth of practical advice for gathering information about public opinion in nondemocratic contexts, which includes looking to events data as indicating underlying attitudes (Tilly, 1983). While this approach has its merits, predispositions are not precipitants, and the translation of attitudes to particularly contentious political behavior is neither one-to-one nor necessarily linear.
Finally, for the above reasons and others, participation in survey research can carry the potential for psychological, physical, social, and emotional harm. As AI, Big Data, and imputation methods for “extracting” attitude information from social media and other networks proliferate, the limits of privacy and confidentiality in even anonymized data are under stress. Furthermore, where legal responsibility for the safety and security of survey respondents and their data has become murkier, incentives for ethically questionable research practices tend to grow.
Individuals can be (re)traumatized by requests for information that may jeopardize their security, or they may experience undue negative consequences for their responses (or lack of response). Research that introduces trauma recall (e.g., Canetti et al., 2017) or emotional distress (e.g., Young, 2019) as an experimental prime or treatment, particularly in a targeted, disenfranchised, or vulnerable population, dangerously stresses the limits of ethical conduct. The danger is particularly acute if such work is carried out by individuals without clinical expertise. It is impossible to fully calculate the actual scientific or societal value of trauma as part of a research experience until the research is complete. This itself is a significant red flag that demands detailed consideration of the trade-offs inherent in asking and answering any given survey question. The responsibility to prevent harm to those executing and administering a survey is also critical, though often overlooked.
Although the elements that constitute a sensitive topic can be highly variable across contexts, the social desirability of certain attitudes about taboo, stigmatized, or highly emotional issues can influence respondent willingness and ability to provide truthful responses. On balance, when the stakes of any given response are high, the stakes of responding at all are likely quite high as well. Social desirability bias can depress that willingness when a truthful response could encourage social sanction, and it could amplify that willingness if respondents instead expect some selective benefit from a particular answer. A priori expectations about the sensitivity of certain language, topics, or responses ought to be informed by a nuanced understanding of the respondents’ cultural, political, and security context. Analysts with extensive knowledge of community norms around the topics of interest will have a significant advantage here, as they will be
more attuned to understanding which types of questioning might trigger such dynamics.
Ethnographers and other qualitative researchers tend to prioritize building trust and meaningful rapport with their survey participants through repeated interaction and mutual curiosity in order to gather true attitudes about all topics, including sensitive ones. Relational interviewing as a complement or an alternative to quantitative survey administration can give more insight into how respondents see not only the form and content of the survey but also their relationship to the questioner(s) (which itself may be a point of sensitivity) (Brosius et al., 2021; Fujii, 2017). Prioritizing these relationships in a transparent, fully informed, and consenting manner can be of great value to minimizing the possibility of untruthful engagement. It can also militate against the possibility of doing known or inadvertent harm.
Quantitative academic research on political attitudes in conflict-affected places has increasingly come to use experiments for mitigating the effects of social desirability bias and eliciting (or triangulating) truthful responses to survey-based attitude inquiries. To examine attitudes toward voting rights among illiterate people in Lebanon, for example, Corstange (2009) used the list experiment, which bundles a sensitive item with anodyne items so that respondents report only a count of the items with which they agree, easing the retrieval and disclosure of delicate information, and randomly assigns respondents to receive the sensitive question in bundled or unbundled fashion. Blair, Imai, and Lyall (2014) have used a combination of list and endorsement experiments to ascertain patterns of support for the NATO-led International Security Assistance Force in Pashtun communities of likely Taliban supporters.
Others would suggest randomized response techniques (Gingerich, 2010) or tools adapted from investigative interviewing, such as the cognitive interview, to deal with respondent reticence or the inability of researchers to access respondent attitudes.12
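To make the aggregation logic of the list experiment concrete, the sketch below illustrates the standard difference-in-means estimator, in which a control group counts its agreement with a short list of anodyne items and a treatment group counts its agreement with the same list plus the sensitive item. The counts and group sizes here are invented for illustration; the cited studies use more elaborate designs and estimators.

```python
import statistics

# Hypothetical item counts: control respondents saw 3 anodyne items;
# treatment respondents saw the same 3 items plus the sensitive item.
# Each value is the number of items the respondent said applied to them.
control_counts = [1, 2, 0, 3, 2, 1, 2, 1]
treatment_counts = [2, 3, 1, 3, 2, 2, 3, 1]

# Difference-in-means estimate of the share holding the sensitive attitude.
prevalence_hat = statistics.mean(treatment_counts) - statistics.mean(control_counts)

# A rough standard error, treating the two groups as independent samples.
se = (statistics.variance(treatment_counts) / len(treatment_counts)
      + statistics.variance(control_counts) / len(control_counts)) ** 0.5

print(f"Estimated prevalence of the sensitive attitude: {prevalence_hat:.2f} (SE ~ {se:.2f})")
```

Because respondents report only counts and never endorse the sensitive item directly, the quantity of interest is recoverable only in the aggregate, which is precisely what offers individual respondents some protection.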
Meaning-making is difficult to observe, much less measure, in cross-sectional snapshots (St. Pierre, 2013). Longitudinal data collections that either sample repeatedly from the same pool of respondents or engage the same respondents in regular waves over time would be optimal for assessing how individuals view their worlds and their roles in those worlds. The degree of access and the resources necessary to conduct such a study are often impractical to secure or simply unavailable, though. When even well-designed longitudinal survey studies reveal inconsistent responses from participants, it can be nearly impossible to determine which response is the “true” answer (Mercklé and Octobre, 2015). In snapshot studies,
___________________
12 For more on cognitive interviewing, see Memon and Bull (1991).
additional time-oriented follow-up questions (“How long would you say you’ve thought that way?” for instance) can open the door for the respondent to reflect on and offer their own useful context around their rootedness in their convictions (Fujii, 2017). Analysts should take caution before extrapolating the results of a survey done in one time period to another, though. This is particularly true during periods of unrest or heightened uncertainty.
A major challenge to collecting complete and truthful attitude data from a nonrandom sample is rooted in a respondent’s history of exposure to the survey instrument, the surveyors, or the process of survey research writ large. Participant socialization and curated narratives can be insidious by-products of researchers’ increased access to particular groups of people and places where attitudinal data may be particularly valuable or previously difficult to collect (Parkinson, 2021). In some places, respondents have generated “scripts” that they have learned to draw from when asked repeatedly to answer common questions posed by researchers (De Juan and Koos, 2021; Krystalli, 2021). Overexposure to survey research also harms respondents, often leading to research fatigue (Mwambari, 2019). In situations where potential respondents do not see refusal to participate as an option, low-quality responses may be all the more likely (Clark, 2008). Sukarieh and Tannock (2012: 494) summarized the effects of high-volume research on residents of the Shatila Palestinian refugee camp in Beirut (“probably one of the most heavily researched neighbourhoods anywhere, and certainly within the Palestinian diaspora”) as destabilizing to social relationships and disappointing in its overpromises of change.
Heavy research exposure can also produce both observation and measurement bias in attitude data through influencing the attentiveness and care that respondents apply to the questions at hand. On one hand, Alvarez and his colleagues (2019: 145) have found that respondent inattentiveness matters significantly to response quality and that “failing to properly account for it may lead to inaccurate estimates of the prevalence of key political attitudes and behaviors of both sensitive and more prosaic nature.” On the other hand, Brown and Pope (2021: 416) note that the volume and quality of information in a “don’t know” response—particularly the possibility of using the response to “hide knowledge behind” it—may be uniquely sensitive to how a survey is administered. Specifically, they argue that survey data collected through Amazon’s Mechanical Turk may reflect respondents’ intense familiarity with answering surveys on the platform,
which itself conditions them against certain answers and steers them toward others.13
That said, we cannot observe the attitudes of individuals who are not surveyed, and convenience samples are often the most ethically, politically, and practically feasible. Still, remembering that context is an integral component of both attitude formation and reporting, analysts ought to use extreme caution when drawing in-context conclusions using data collected out of context. Gathering data from respondents located in authoritarian countries, for example, has its well-noted difficulties (e.g., Glasius et al., 2018), but using attitude data from migrants and expats to indicate public opinion in their home country can introduce substantial measurement error by way of exception fallacy. Focusing on the Cuban case, Roberts (1999: 245) noted that the series of Radio Marti–conducted surveys of Cuban public opinion that relied on recent migrants to the United States were fundamentally hamstrung, as “emigres represented the disillusioned and disenfranchised” and the relationship between their attitudes and the attitudes of Cubans still on the island was “anyone’s guess.” While a NORC-conducted, on-island poll of Cubans in 2017 (NORC, 2017) is reported to have found close to a 70 percent approval rate for closer U.S.-Cuban relations, this result is in a sample of 840 people drawn from a population of roughly 11 million. For drawing statistical inferences, this sample is appropriately and sufficiently large. For unpacking the possibility of misleading or untrue responses, and of attributing them to an entire population many orders of magnitude larger than the sample, the appropriateness of the sample size is much less clear. The following paragraph from an AP News story about the poll and its results demonstrates some of the substantive complexity of the respondents’ political realities: “Seventy-six percent said they had to be careful about expressing themselves freely. Over half of Cubans said they would move away from the country if given the chance. Of those, 70 percent said they would head to the United States, where many respondents said they had relatives” (Swanson and Weissenstein, 2017).
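As a back-of-the-envelope illustration of why a sample of 840 is sufficient for conventional inference (standard simple-random-sampling arithmetic, not a reanalysis of the NORC study), the maximum margin of error at 95 percent confidence is roughly

```latex
\mathrm{MOE} \approx z \sqrt{\frac{p(1-p)}{n}} = 1.96 \sqrt{\frac{0.5 \times 0.5}{840}} \approx 0.034,
```

or about plus or minus 3.4 percentage points, with the finite population correction for a population of roughly 11 million being negligible. That arithmetic speaks only to sampling variability; it says nothing about whether the responses themselves were candid, which is the concern raised here.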
These issues notwithstanding, as the size of the reachable set of respondents grows, purposive stratification can be a useful strategy for maximizing representative variation in a nonrandom sample. Jäger (2017), for example, found that social media advertising (specifically, through Facebook) facilitated the collection of large samples of political activists in Germany and Thailand and that sample representativeness could be substantially improved with post-stratification. Unless stratification is done to specifically maximize the odds that the data would capture attitude truth-tellers either instead of or alongside other relevant dimensions, though, the likely effects
___________________
13 On the relative overexposure of MTurkers to survey research, see also Hitlin (2016).
of such a strategy on one’s ability to ascertain respondents’ true attitudes are not clear.14
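To illustrate the post-stratification adjustment mentioned above, the following minimal sketch (with invented strata, shares, and responses rather than anything from Jäger’s study) reweights a skewed convenience sample so that each stratum counts in proportion to its share of the target population.

```python
# Hypothetical population and sample composition by stratum (e.g., age group).
population_share = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}
sample_share     = {"18-29": 0.55, "30-49": 0.30, "50+": 0.15}  # convenience sample skews young

# Post-stratification weight for each stratum: population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical respondents: (stratum, 1 = reports supporting the policy, 0 = does not).
respondents = [("18-29", 1), ("18-29", 0), ("30-49", 1), ("50+", 0), ("30-49", 1), ("18-29", 1)]

weighted_support = sum(weights[g] * y for g, y in respondents)
total_weight = sum(weights[g] for g, _ in respondents)

print(f"Unweighted support:      {sum(y for _, y in respondents) / len(respondents):.2f}")
print(f"Post-stratified support: {weighted_support / total_weight:.2f}")
```

Adjustments of this kind can correct for observable imbalances in who was reached, but they cannot, on their own, correct for unobservable differences in who answers truthfully.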
When researchers and survey respondents are embedded in different social and political contexts, developing and deploying surveys with maximum cultural sensitivity can be particularly complicated. While it is best practice to offer respondents a survey questionnaire in the language with which they feel most comfortable (Behr and Shishido, 2016), limited awareness of important cultural cues and primes, clumsy translation, or the lack of a shared language base for understanding the literal text of a questionnaire can easily sow confusion, misunderstanding, or uncertainty among respondents and analysts alike. Such incongruities can greatly complicate the collection and successful measurement of true attitude data, even among careful analysts. Furthermore, it is important to remember that reliance on native citizens or country/cultural experts for calibrating survey elements may not guarantee that all possible interpretations will (or can) be accounted for. For example, Bischoping and Schuman (1992: 331) concluded from an analysis of multiple preelection surveys and a survey experiment that, in the run-up to the 1990 Nicaraguan election, “the failure of some [election] polls and the success of others lies in an unusual interaction between respondents’ vote intentions and the perceived partisanship of a poll.” According to the authors, the key experimental treatment—the color and lettering on the pen used by survey enumerators to record respondents’ answers, which could be perceived as aligning the enumerator with the Sandinista party (red and black, lettered DANIEL PRESIDENTE), the UNO party (blue and white, lettered UNO), or neutral (red and white, no lettering)—was developed in collaboration with two Nicaraguan political insiders; on this basis, the authors were sufficiently convinced that the colors and letters used were clear signals of either bias or neutrality (Bischoping and Schuman, 1994: 495). This assumption may not have been ironclad, however. In a prominent critique, Anderson (1994: 490) demonstrates that the “neutral” red and white color combination could have easily been understood instead as indicating a pro-FSLN bias, thus simply replicating one of the other treatment conditions. Further, Anderson notes that this may have had significant implications for the original study results, as “both [the ‘pro-FSLN’ and ‘neutral’] types of interviews collected data predicting a Sandinista victory.”
___________________
14 It is worth considering whether this sort of information is even observable prior to data collection attempt. If not, one might see even more clearly how difficult it can be to adjust a survey research design for these sorts of concerns.
The specific survey items also directly affect the propensity of respondents to provide whole and truthful responses. The inclusion of a “no opinion”/“don’t care” option (or a “not at all” option for scale responses), for example, can be a good way to avoid eliminating (or appearing to eliminate) the possibility of nonattitudes. Open-ended questions prioritize the respondent’s viewpoint over the researcher’s and provide research participants with the opportunity to offer information about their attitudes and opinions in ways that the survey administrators might not have anticipated. While the open-ended question allows for more nuanced data gathering, it also reduces the control that researchers have over the speed and tone of the engagement.
The Parker and Blum MAGA attitudes study referenced earlier highlights one of the biggest challenges to remote and online survey enumeration: verifying that respondents are who they say they are. The proliferation of “bots” and other fake social media accounts, along with organized schemes that allow individuals to serially answer surveys for pay, all militate against the observation of true attitudes and can significantly contribute to errors in measurement.15
Errors in comprehension and access in online surveys may correlate negatively with a respondent’s computer literacy, while other respondents may hesitate to provide full and truthful responses to face-to-face, handwritten surveys for which there may be an incriminating paper trail.
Even with careful question wording, sequencing, and presentation, there is of course a potential for different respondents to vary in their interpretations and experiences of all parts of a survey. Focus group discussions and consultation with area experts can be very useful for developing question batteries. These are opportunities to become further attuned to cultural nuances and connotative implications that might otherwise not occur to the survey designer or enumerator. Post facto focus groups can be helpful for those who are interested in analyzing survey data that has already been collected but do not have knowledge of or access to such pre-implementation instrument reviews.
While these challenges certainly highlight the difficulties inherent in collecting and analyzing true attitude data, they are not fundamentally insurmountable. Cumulatively, extant research suggests a four-pronged
___________________
15 In a randomized survey experiment involving American adults, Fernandez Lynch and her colleagues (2019) found that payment for online survey participation “may be associated with deception about eligibility for study participation, but higher payment may not lead to higher rates of deception.”
approach to assessing one’s confidence level in the accuracy and ethical propriety of any attitude data of interest.
The discussion above makes clear that best practices in responsive research design, implementation, and analysis encourage all to investigate the data generating process to the greatest extent possible. In the process, analysts ought to aim to maximize contextual knowledge of both the respondents and the data themselves. Whether dealing with a new survey or already collected data, it is critical that analysts gather as much information as possible about the conduct of the survey before attempting to analyze the responses. Familiarizing oneself with the survey methodology and rollout ought to clarify how thoroughly the above challenges were dealt with before the data were fully assembled. With this information, analysts may more credibly and defensibly hypothesize about the direction or strength of (potential) biases in the data. Relatedly, it is advisable to minimize reliance on opaque sources. Responses that cannot be practically or ethically solicited without the use of deception, subterfuge, or secrecy are inherently vulnerable to the eight challenges outlined earlier and to others as well. Source opacity will also militate against gathering as much background knowledge as possible about the data and its collection. Finally, whether or not the raw data or its collectors are available for further investigation, analysts ought to prioritize analysis of survey and question metadata. As noted above, survey responses can also provide information about how the respondent relates to the asking and sometimes to the asker, along with reflecting answers to the question directly asked. Metadata can be particularly helpful in unpacking responses or trends that an analyst finds surprising or counterintuitive.
Survey metadata are attributes of the data itself that provide background information about survey conduct, content, and responses. Metadata are collected in conjunction with the item responses, but often not as the focus of study. Still, they can often be aggregated at meaningful intervals for a comprehensive picture of common patterns in the survey’s administration or results. As such, survey metadata can be particularly useful to the first three general mitigation strategies.
In the qualitative context, Lee Ann Fujii has argued that “the spoken and unspoken expressions about people’s interior thoughts and feelings, which they do not always articulate in their stories or responses to interview questions…are as valuable as the testimonies themselves because they indicate how the current social and political landscape is shaping what people might say to a researcher” (2010: 232). From this perspective, survey metadata often includes attributes like tone and unasked questions. In quantitative work, metadata often includes variables indicating the number of unanswered questions, how long it took a respondent to complete specific items, how many times and when the survey was accessed or administered before completion, and the distribution of incomplete responses such as “don’t know” or “prefer not to say” for each individual question. Even purposeful deception sheds some light on the underlying political value of answering any given question or participating in any given survey. Analysts may also explore whether respondents skipped and returned to some questions more than others or changed their responses multiple times as indicators of distress, uncertainty, engagement, or focus.16
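As an illustration of how such quantitative metadata might be tabulated (the response log, field names, and codes below are hypothetical, not a description of any particular survey platform’s export format), an analyst could summarize skips, “don’t know” answers, and time on item for each respondent before looking for patterns.

```python
from collections import defaultdict

# Hypothetical item-response log: (respondent_id, question_id, answer, seconds_on_item).
# None marks a skipped question; "DK" marks a "don't know" response.
responses = [
    ("r1", "q1", "agree", 12),    ("r1", "q2", "DK", 4),      ("r1", "q3", None, 2),
    ("r2", "q1", "disagree", 40), ("r2", "q2", "agree", 25),  ("r2", "q3", "agree", 30),
]

# Per-respondent metadata: skip count, "don't know" count, average seconds per item.
metadata = defaultdict(lambda: {"skips": 0, "dont_know": 0, "total_seconds": 0, "items": 0})
for rid, qid, answer, seconds in responses:
    m = metadata[rid]
    m["items"] += 1
    m["total_seconds"] += seconds
    if answer is None:
        m["skips"] += 1
    elif answer == "DK":
        m["dont_know"] += 1

for rid, m in metadata.items():
    avg_seconds = m["total_seconds"] / m["items"]
    print(f"{rid}: skips={m['skips']}, dont_know={m['dont_know']}, avg_seconds={avg_seconds:.1f}")
```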
Along with these items, enumerator training can include guidance on taking detailed notes about any difficulties enumerators may have observed or experienced before, during, or after administering the survey. Such qualitative and interpretive metadata can be particularly valuable to the analyst wondering what a respondent is communicating with a given reaction or (non)response, and can be as enlightening as, if not more enlightening than, quantitative item response metrics. Metadata can also help analysts uncover and remediate faked data. Goldstein (2014) describes navigating this problem when survey enumerators are found to have fabricated survey data and suggests that looking for systematic outliers by enumerator (such as “heaping around easy numbers”) has been fruitful for others in past work. Goldstein also describes applications of Benford’s Law, a forensic data accounting principle often used to investigate quantitative data for human interference, as a useful option. In the context of data on the sensitive topic of government coercion and violence, Bond and colleagues (2021) provide an application of Benford’s Law to state-supplied event count data. Mebane (2016) explores Benford’s Law and other data forensics tools where survey data on attitudes is available in quantitative formats, such as exit polls and election results.
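A minimal sketch of the first-digit screening that Goldstein and others describe appears below; the enumerator labels and counts are invented for illustration, and any real application would need to confirm that the quantities in question plausibly follow Benford’s distribution and that enough values are available for the test to be meaningful.

```python
import math
from collections import Counter

def first_digit(n):
    """Return the leading digit of a nonzero integer value."""
    return int(str(abs(int(n)))[0])

def benford_chi_square(values):
    """Compare observed first-digit frequencies to Benford's expected distribution."""
    digits = [first_digit(v) for v in values if int(v) != 0]
    observed = Counter(digits)
    n = len(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford probability of leading digit d
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2  # compare against a chi-square distribution with 8 degrees of freedom

# Hypothetical reported counts, grouped by enumerator.
enumerator_a = [12, 187, 34, 22, 9, 151, 18, 26, 41, 13]
enumerator_b = [50, 50, 55, 50, 60, 50, 55, 50, 50, 55]  # suspicious heaping on "easy" numbers

print("Enumerator A chi-square:", round(benford_chi_square(enumerator_a), 1))
print("Enumerator B chi-square:", round(benford_chi_square(enumerator_b), 1))
```

Values heaped on “easy” numbers, as in the second enumerator’s reports, produce a leading-digit distribution far from the Benford expectation and a correspondingly large test statistic.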
For many of the concerns and realities outlined in this paper, there is no easy “fix” that will neutralize their effects on the process of trying to understand what people think, and why. Cultivating an awareness of the intertwined nature of technical and ethical challenges to parsing political attitudes is central to how analysts can, do, and ought to assess data quality and draw reliable inferences while serving the American public’s best interest.
___________________
16 While adding redundant questions to a survey to see if the wording of a question or repeated questioning might change one’s answer is common in academic research, Schwarz and colleagues (2020) failed to find evidence that survey respondents’ memory of earlier answers systematically declines as the survey progresses, even when given a memory interference task between the initial and repeated questions. They take this as “a serious challenge to using repeated questions within the same survey.” Response consistency cannot distinguish between truthful and false answers, nor between strategic and nonstrategic answers. It can, however, provide some insight into attitudinal complexity.
Alvarez, R. M., L.R. Atkeson, I. Levin, and L. Li. 2019. “Paying Attention to Inattentive Survey Respondents.” Political Analysis 27(2):145–162. https://www.cambridge.org/core/journals/political-analysis/article/abs/paying-attention-to-inattentive-survey-respondents/BEDA4CF3245489645859E7E6B022E75A.
Amaya, A., R. Bach, F. Kreuter, and F. Keusch. 2020. “Measuring the Strength of Attitudes in Social Media Data.” In Big Data Meets Survey Science: A Collection of Innovative Methods edited by C. Hill, P.P. Biemer, T.D. Buskirk, L. Japec, A. Kirchner, S. Kolenikov, and L.E. Lyberg. Hoboken, NJ: Wiley Online Library. DOI: 10.1002/9781118976357.ch5.
American Medical Association. n.d. “Release of Data from Unethical Experiments.” Ethics guideline, American Medical Association, Chicago. https://www.ama-assn.org/delivering-care/ethics/release-data-unethical-experiments.
Anderson, L. 1994. “Neutrality and Bias in the 1990 Nicaraguan Preelection Polls: A Comment on Bischoping and Schuman.” American Journal of Political Science 38(2):486–494.
Azar, B. 2008. “IAT: Fad or Fabulous?” Monitor on Psychology 39(7):44. https://www.apa.org/monitor/2008/07-08/psychometric.
Banks, A.J. 2014. Anger and Racial Politics: The Emotional Foundation of Racial Attitudes in America. Cambridge, United Kingdom: Cambridge University Press.
Banks, A.J., and H.M. Hicks. 2016. “Fear and Implicit Racism: Whites’ Support for Voter ID Laws.” Political Psychology 37(5):641–658. DOI: 10.1111/pops.12292.
Behr, D., and K. Shishido. 2016. “The Translation of Measurement Instruments for Cross-Cultural Surveys.” In The SAGE Handbook of Survey Methodology edited by C. Wolf. Los Angeles: SAGE Publications. DOI: 10.4135/9781473957893.
Bischoping, K., and H. Schuman. 1992. “Pens and Polls in Nicaragua: An Analysis of the 1990 Preelection Surveys.” American Journal of Political Science 36(2):331–350.
Bischoping, K., and H. Schuman. 1994. “Pens, Polls, and Theories: The 1990 Nicaraguan Election Revisited: A Reply to Anderson.” American Journal of Political Science 38(2): 495–499.
Blair, G., K. Imai, and J. Lyall. 2014. “Comparing and Combining List and Endorsement Experiments: Evidence from Afghanistan.” American Journal of Political Science 58(4): 1043–1063. DOI: 10.1111/ajps.12086.
Blaydes, L., and R.M. Gillum. 2013. “Religiosity of Interviewer Effects: Assessing the Impact of Veiled Enumerators on Survey Response in Egypt.” Politics and Religion 6(3):459–482. DOI: 10.1017/S1755048312000557.
Blee, K.M., and A. Currier. 2011. “Ethics Beyond the IRB: An Introductory Essay.” Qualitative Sociology 34:401–413. DOI: 10.1007/s11133-011-9195-z.
Blum, R.M., and C.S. Parker. 2021. “Panel Study of the MAGA Movement.” Polling survey, University of Washington, Seattle, WA. https://sites.uw.edu/magastudy/.
Bond, K.D. 2018. “Reflexivity and Revelation.” Qualitative and Mixed Methods Research 16(1):45–47.
Bond, K.D., M. Lake, and S.E. Parkinson. 2020. “Lessons from Conflict Studies on Research during the Coronavirus Pandemic.” Items (blog), July 2. https://items.ssrc.org/covid-19-and-the-social-sciences/social-research-and-insecurity/lessons-from-conflict-studies-on-research-during-the-coronavirus-pandemic/.
Bond, K.D., C.R. Conrad, D. Moses, and J.W. Simmons. 2021. “Detecting Anomalies in Data on Government Violence.” Political Science Research and Methods First View: 1–8. DOI: 10.1017/psrm.2021.40.
Braithwaite, A., T.S. Chu, J. Curtis, and F. Ghosn. 2019. “Violence and the Perception of Risk Associated with Hosting Refugees.” Public Choice 178:473–492.
Brosius, A., M. Hameleers, and T.G.L.A. van der Meer. 2021. “Can We Trust Measures of Trust? A Comparison of Results from Open and Closed Questions.” Quality & Quantity. DOI: 10.1007/s11135-021-01250-3.
Brown, A.R., and J.C. Pope. 2021. “Mechanical Turk and the ‘Don’t Know’ Option.” Political Science and Politics 54(3):416–420. DOI:10.1017/S1049096520001651.
Canetti, D., G. Hirschberger, C. Rapaport, J. Elad-Strenger, T. Ein-Dor, S. Rosenveig, T. Pyszczynski, and S. Hobfoll. 2017. “Collective Trauma from the Lab to the Real World: The Effects of the Holocaust on Contemporary Israeli Political Cognitions.” Political Psychology 39(1):3–21. DOI: 10.1111/pops.12384.
Clark, T. 2008. “‘We’re Over-Researched Here!’: Exploring Accounts of Research Fatigue within Qualitative Research Engagements.” Sociology 42(5):953–970. DOI: 10.1177/0038038508094573.
Converse, P.E. 2006. “The Nature of Belief Systems in Mass Publics (1964).” Critical Review 18(1–3):1–74. DOI: 10.1080/08913810608443650.
Corstange, D. 2009. “Sensitive Questions, Truthful Answers? Modeling the List Experiment with LISTIT.” Political Analysis 17(1):45–63. DOI: 10.1093/pan/mpn013.
Cronin-Furman, K., and M. Lake. 2018. “Ethics Abroad: Fieldwork in Fragile and Violent Contexts.” PS: Political Science & Politics 51(3):607–614. DOI: 10.1017/S1049096518000379.
Davenport, C. 2013. “Researching While Black: Why Conflict Research Needs More African-Americans (Maybe).” Political Violence at a Glance (blog). https://politicalviolenceataglance.org/2013/04/10/researching-while-black-why-conflict-research-needs-more-african-americans-maybe/.
Davenport, C., and W.H. Moore. 2015. “Conflict Consortium: Standards & Best Practices for Observational Data.” https://conflictconsortium.weebly.com/uploads/1/8/3/5/18359923/cc-datastandardspractices7apr2015.pdf.
De Juan, A., and C. Koos. 2021. “Survey Participation Effects in Conflict Research.” Journal of Peace Research 58(4):623–639. DOI: 10.1177/0022343320971034.
Devos, T. 2008. “Implicit Attitudes 101: Theoretical and Empirical Insights.” In Attitudes and Attitude Change edited by W.D. Crano and R. Prislin. New York: Psychology Press.
Driscoll, J., and C. Schuster. 2018. “Spies Like Us.” Ethnography 19(3):411–430.
Eisler, P., N. Parker, and J. Harte. 2018. “Trump Supporters’ Election Test: A Movement or a Moment.” Reuters, November 2. https://www.reuters.com/article/us-usa-election-trump-maga-insight/trump-supporters-election-test-a-movement-or-a-moment-idUSKCN1N710E.
Fernandez Lynch, H., S. Joffe, H. Thirumurthy, D. Xie, and E. Largent. 2019. “Association Between Financial Incentives and Participant Deception About Study Eligibility.” JAMA Network Open 2(1):e187355. DOI: 10.1001/jamanetworkopen.2018.7355.
Fujii, L.A. 2010. “Shades of Truth and Lies: Interpreting Testimonies of War and Violence.” Journal of Peace Research 47(2):231–241. DOI: 10.1177/0022343309353097.
Fujii, L.A. 2017. Interviewing in Social Science Research: A Relational Approach. Abingdon-on-Thames, United Kingdom: Routledge.
Gallup. 2011. “Confidence in Government Divides Yemenis.” Survey report, Gallup, Washington, DC. https://news.gallup.com/poll/157052/confidence-government-divides-yemenis.aspx.
Gawronski, B., W. Hofmann, and C.J. Wilbur. 2006. “Are ‘Implicit’ Attitudes Subconscious?” Consciousness and Cognition 15:485–499. DOI: 10.1016/j.concog.2005.11.007.
Gingerich, D. 2010. “Understanding Off-the-Books Politics: Conducting Inference on the Determinants of Sensitive Behavior with Randomized Response Surveys.” Political Analysis 18(3):349–380. DOI: 10.1093/pan/mpq010.
Glasius, M., M. de Lange, J. Bartman, E. Dalmasso, A. Lv, A. Del Sordi, M. Michaelsen, and K. Ruijgrok. 2018. Research, Ethics and Risk in the Authoritarian Field. Basingstoke, United Kingdom: Palgrave Macmillan. DOI: 10.1007/978-3-319-68966-1.
Gneezy, A., and D.M.T. Fessler. 2011. “Conflict, Sticks and Carrots: War Increases Prosocial Punishments and Rewards.” Proceedings of the Royal Society of London B: Biological Sciences 279(1727):219–223.
Goldstein, M. 2014. “When Bad People Do Good Surveys.” World Bank Blogs: Development Impact (blog). https://blogs.worldbank.org/impactevaluations/when-bad-people-do-good-surveys.
Halman, L. 2007. “Political Values.” In The Oxford Handbook of Political Behavior edited by R.J. Dalton and H. Klingemann. Oxford, United Kingdom: Oxford University Press. DOI: 10.1093/oxfordhb/9780199270125.003.0016.
Hennessy, B. 1970. “A Headnote on the Existence and Study of Political Attitudes.” Social Science Quarterly 51(3):463–476.
Hitlin, P. 2016. “Research in the Crowdsourcing Age, a Case Study.” Pew Research Center, July 11. http://www.pewinternet.org/2016/07/11/research-in-the-crowdsourcing-age-a-case-study/.
Hoover Green, A., and D.K. Cohen. 2021. “Centering Human Subjects: The Ethics of ‘Desk Research’ on Political Violence.” Journal of Global Security Studies 6(2):ogaa029. DOI: 10.1093/jogss/ogaa029.
Horn, C. 2011. “Measuring Public Opinion under Political Repression.” American Diplomacy (blog), April. https://americandiplomacy.web.unc.edu/2011/04/measuring-public-opinion-under-political-repression/.
Jäger, K. 2017. “The Potential of Online Sampling for Studying Political Activists around the World and across Time.” Political Analysis 25(3):329–343. DOI: 10.1017/pan.2017.13.
Jost, J.T. 2018. “The IAT is Dead, Long Live the IAT: Context-Sensitive Measures of Implicit Attitudes Are Indispensable to Social and Political Psychology.” Current Directions in Psychological Science 28(1):10–19.
Krosnick, J.A., A.L. Holbrook, M.K. Berent, R.T. Carson, W.M. Hanemann, R.J. Kopp, R.C. Mitchell, S. Presser, P.A. Ruud, V.K. Smith, W.R. Moody, M.C. Green, and M. Conaway. 2002. “The Impact of ‘No Opinion’ Response Options on Data Quality: Nonattitude Reduction or an Invitation to Satisfice?” Public Opinion Quarterly 66(3): 371–403.
Krystalli, R.C. 2021. “Narrating Victimhood: Dilemmas and (In)Dignities.” International Feminist Journal of Politics 23(1):125–146. DOI: 10.1080/14616742.2020.1861961.
Lupu, N., and K. Michelitch. 2018. “Advances in Survey Methods for the Developing World.” Annual Review of Political Science 21:195–214. DOI: 10.1146/annurev-polisci-052115-021432.
Mebane, Jr., W. 2016. “Election Forensics: Frauds Tests and Observation-level Frauds Probabilities.” Paper presented at the 2016 Annual Meeting of the Midwest Political Science Association, Chicago, April 7–10. Available at http://www-personal.umich.edu/~wmebane/mw16.pdf.
Memon, A., and R. Bull. 1991. “The Cognitive Interview: Its Origins, Empirical Support, Evaluation and Practical Implications.” Journal of Community & Applied Social Psychology 1(4):291–307. DOI: 10.1002/casp.2450010405.
Mercklé, P., and S. Octobre. 2015. “Do Survey Respondents Lie? Inconsistent Responses and the Biographical Illusion in a Longitudinal Survey on Adolescents’ Leisure Outings.” Revue française de sociologie 56:561–591. DOI: 10.3917/rfs.563.0561.
Mwambari, D. 2019. “Local Positionality in the Production of Knowledge in Northern Uganda.” International Journal of Qualitative Methods 18:1–12. DOI: 10.1177/1609406919864845.
NORC (National Opinion Research Center). 2017. “A Rare Look Inside Cuban Society: A New Survey of Cuban Public Opinion.” NORC, https://www.norc.org/PDFs/Survey%20of%20Cuban%20Opinion/NORC_Cuba_Report_2017_DTPv7r1.pdf.
Olteanu, A., C. Castillo, F. Diaz, and E. Kiciman. 2019. “Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries.” Frontiers in Big Data 2(13):1–33. DOI: 10.3389/fdata.2019.00013.
Parkinson, S.E. 2021. “(Dis)Courtesy Bias: ‘Methodological Cognates,’ Data Validity, and Ethics in Violence-Adjacent Research.” Comparative Political Studies 55(3):420–450. DOI: 10.1177/00104140211024309.
Petty, R.E., Z.L. Tormala, P. Briñol, and W.B.G. Jarvis. 2006. “Implicit Ambivalence from Attitude Change: An Exploration of the PAST Model.” Journal of Personality and Social Psychology 90(1):21–41. DOI: 10.1037/0022-3514.90.1.21.
Procter, C., and B. Spector. 2019. “The New Ethnographer: Addressing Challenges in Contemporary Ethnographic Research.” Blog Series.
Roberts, C. 1999. “Measuring Cuban Public Opinion: Methodology.” Association for the Study of the Cuban Economy, November 30. https://www.ascecuba.org/asce_proceedings/measuring-cuban-public-opinion-methodology/.
Ryan, T.J. 2017. “How Do Indifferent Voters Decide? The Political Importance of Implicit Attitudes.” American Journal of Political Science 61(4):892–907. http://www.jstor.org/stable/26379534.
Ryan, T.J., and Y. Krupnikov. 2021. “Split Feelings: Understanding Implicit and Explicit Political Persuasion.” American Political Science Review 115(4):1424–1441. DOI: 10.1017/S0003055421000538.
Schatz, E. 2009. Political Ethnography: What Immersion Contributes to the Study of Power, pp. 119–142. Chicago: University of Chicago Press.
Schwarz, N. 2007. “Attitude Construction: Evaluation in Context.” Social Cognition 25:638–656. DOI: 10.1521/soco.2007.25.5.638.
Schwarz, H., M. Revilla, and W. Weber. 2020. “Memory Effects in Repeated Survey Questions: Reviving the Empirical Investigation of the Independent Measurements Assumption.” Survey Research Methods 14(3):325–344. DOI:10.18148/srm/2020.v14i3.7579.
Shoemaker, P.J., M. Eichholz, and E.A. Skewes. 2002. “Item Nonresponse: Distinguishing between Don’t Know and Refuse.” International Journal of Public Opinion Research 14(2):193–201. DOI: 10.1093/ijpor/14.2.193.
St. Pierre, E.A. 2013. “Post-Qualitative Research: The Critique and the Coming After.” In Collecting and Interpreting Qualitative Materials, 4th ed., edited by N.K. Denzin and Y.S. Lincoln. Los Angeles: SAGE Publications.
Swanson, E., and M. Weissenstein. 2017. “Rare Poll Finds Cuban Citizens Favor Better U.S. Relations.” Associated Press, March 21. https://apnews.com/article/47ab037c653f4d4a9ea3fe706486c65c.
Tilly, C. 1983. “Speaking Your Mind Without Elections, Surveys, or Social Movements.” Public Opinion Quarterly 47(4):461–478.
Van Dyke, V. 1962. “Values and Interests.” American Political Science Review 56(3):567–576.
Verplanken, B., and S. Orbell. 2022. “Attitudes, Habits, and Behavior Change.” Annual Review of Psychology 73:327–352. DOI: 10.1146/annurev-psych-020821-011744.
Westerhof, G.J. 1994. Statements and Stories: Towards a New Methodology of Attitude Research. Amsterdam: Thesis Publishers.
Wood, E.J. 2006. “The Ethical Challenges of Field Research in Conflict Zones.” Qualitative Sociology 29:373–386. DOI: 10.1007/s11133-006-9027-8.
Young, L.E. 2019. “The Psychology of State Repression: Fear and Dissent Decisions in Zimbabwe.” American Political Science Review 113(1):140–155. DOI: 10.1017/S000305541800076X.