
Let's Focus on Research Transparency

Posted: Sat Oct 29, 2016 10:40 am
by pollack57
When it comes to human subjects research, it seems as if the most vexed DART question -- and the one to which commentators invariably gravitate -- is data access: Are researchers required to provide (a) editors or (b) the public with direct access to raw data such as field notes or interview transcripts, with or without the identities of the interview subjects?

The reason this question is vexed, it seems to me, is that it presents an apparent clash of absolutes: on the one hand, editors and readers may believe that the only way to be certain that interview data is not biased, or even fabricated, is to see the raw data themselves; on the other hand, researchers can rightly point to their ethical commitment, and their practical need, to protect the confidentiality of their subjects, many of whom (even elites) are vulnerable in some way to having their confidential statements revealed publicly (not to mention that our IRB-mandated consent agreements may legally preclude sharing such data with anyone outside the research team, including editors).

We can't avoid that question of data access, and I hope we continue to discuss it further -- but I'd like to shift the focus to what might be a more promising area for compromise and establishing best practice, and that's research transparency. In an excellent chapter (citation below), Erik Bleich and Robert Pekkanen have explored the question of transparency in interview research, asking how researchers could increase the transparency of their published research, while fully respecting legal and ethical commitments to human subjects. Their discussion is rich, and deserves to be widely read. Even if we cannot share field notes or (even redacted) transcripts for fear of violating commitments to our subjects, they argue, we can take systematic and explicit steps to increase our production and analytic transparency in ways that can reassure readers about the validity of our empirical claims.

With respect to production transparency, for example, we can indicate, in general terms, the process through which we identified and recruited our sample of interview subjects; how many subjects were interviewed; whether our requests were rejected by some potential subjects, and whether those rejections introduce bias into the sample; and more generally whether our sample is broadly representative in terms of the variables that might be thought relevant to our study. All of this would have to be done with great care, so as not to inadvertently "out" subjects to whom we have promised confidentiality, but in most cases the obstacles are not insurmountable.
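
As a purely illustrative sketch of how such a summary might be produced -- assuming one keeps an anonymized recruitment log, and with the file name, column names, and categories below invented for the example -- something along these lines would do:

    # Minimal, hypothetical sketch: summarize an anonymized recruitment log
    # for a production-transparency appendix. The file name and the
    # "category" / "outcome" columns are invented for illustration.
    import csv
    from collections import Counter

    with open("recruitment_log.csv", newline="", encoding="utf-8") as f:
        contacted = list(csv.DictReader(f))   # one row per person approached

    total = len(contacted)
    interviewed = [r for r in contacted if r["outcome"] == "interviewed"]
    declined = [r for r in contacted if r["outcome"] == "declined"]

    print(f"Contacted {total}; interviewed {len(interviewed)}; "
          f"declined {len(declined)} ({len(declined) / total:.0%} refusal rate)")

    # Compare the category mix of decliners with the full pool, as a rough
    # check on whether refusals skew the sample.
    print("Declined by category: ", Counter(r["category"] for r in declined))
    print("Contacted by category:", Counter(r["category"] for r in contacted))

The point of working only with aggregate counts is that nothing identifying ever needs to reach the published appendix.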

With respect to analytic transparency, similarly, we may not be able to provide transcripts to prove that we are not cherry-picking quotes and evidence, but we certainly can indicate in general terms whether our quotes are representative of the sample as a whole, for example by noting that a particular view was expressed by 9 out of 12 respondents, or that a particular view was expressed most often by a particular category of respondent.
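
Again purely as a sketch -- the respondents and thematic codes below are invented -- the tally behind a claim like "9 out of 12 respondents" might look like this once passages have been hand-coded by (anonymized) respondent:

    # Minimal, hypothetical sketch: count how many respondents were coded as
    # expressing a given view, so quotes can be reported with a denominator.
    coded_views = {
        "R01": {"supports_reform", "cites_funding_constraints"},
        "R02": {"supports_reform"},
        "R03": {"opposes_reform", "cites_funding_constraints"},
    }

    def tally(view):
        holders = [r for r, views in coded_views.items() if view in views]
        return f"{len(holders)} of {len(coded_views)} respondents coded '{view}'"

    print(tally("supports_reform"))            # -> 2 of 3 respondents coded 'supports_reform'
    print(tally("cites_funding_constraints"))  # -> 2 of 3 respondents coded 'cites_funding_constraints'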

The advantage of focusing on research transparency, it seems to me, is that it promises to improve the quality of our work, and the confidence of the readership in our findings, without posing a direct threat to the confidentiality of vulnerable interview subjects. Am I right about this? And if so, how can we follow Bleich and Pekkanen in identifying best practices that increase production and analytic transparency in ways that are consistent with our professional ethics and our commitments to vulnerable human subjects? I'm particularly interested in hearing first-hand accounts of how researchers attempt to strike this balance in their own work.

* Erik Bleich and Robert Pekkanen, "How to Report Interview Data," in Layna Mosley, ed., Interview Research in Political Science (Ithaca: Cornell University Press, 2013), 84-103.

Re: Let's Focus on Research Transparency

Posted: Tue Nov 29, 2016 5:21 pm
by Alice Kang
"The advantage of focusing on research transparency, it seems to me, is it promises to improve the quality of our work, and the confidence of the readership in our findings, without posing a direct threat to the confidentiality of vulnerable interview subjects. Am I right about this? And if so, how can we follow Bleich and Pekkanen in identifying best practices that increase production and analytic transparency in ways that our consistent with our professional ethics and our commitments to vulnerable human subjects? I'm particularly interested in hearing first-hand accounts of how researchers attempt to strike this balance in their own work."

I will give a first-hand account using my first book (a major revision of my dissertation), which is based on interviews along with participant observation and primary and secondary sources.

"With respect to production transparency, for example, we can indicate, in general terms, the process through which we identified and recruited our sample of interview subjects; how many subjects were interviewed; whether our requests were rejected by some potential subjects, and whether those rejections introduce bias into the sample; and more generally whether our sample is broadly representative in terms of the variables that might be thought relevant to our study. All of this would have to be done with great care, so as not to inadvertently "out" subjects to whom we have promised confidentiality, but in most cases the obstacles are not insurmountable."

Production transparency is something that was emphasized to my peers and me in graduate school. In my qualitative methods class and in going through the dissertation proposal, IRB, and external funding processes, I was pushed to clarify and document how I recruited interview subjects, estimate the number of subjects, keep track of who was turning requests down, and consider biases in the list of interviewees with regard to the variables of interest. I wrote this up for my book. The press asked me to move much of that material to a Research Appendix, although a good amount was left in the Introduction.

"With respect to analytic transparency, similarly, we may not be able to provide transcripts to prove that we are not cherry-picking quotes and evidence, but we certainly can indicate in general terms whether our quotes are representative of the sample as a whole, for example by noting that a particular view was expressed by 9 out of 12 respondents, or that a particular view was expressed most often by a particular category of respondent."

In my book, I did not always say whether quotations were representative of the sample as a whole because (1) I sought out specific individuals who were involved in writing or contesting draft legislation and (2) not everyone knew the same things or understood things the same way. Trained in graduate school to look for nuances and contradictions, I tried to point out in the book when interviewees disagreed or emphasized different things.

Not having read the Bleich and Pekkanen chapter, I wonder: do the authors include reflexivity as a way of enhancing analytic transparency? Analytic transparency might/should include a discussion of how the researcher's position, funding, and training affect the questions that are being asked and how the answers are being interpreted, and so on. Interestingly, my press wanted me to be more reflective about my own position and training, something that was suggested by a reviewer (who was a historian, by the way). To the best of my memory, no political science journal, editor, or reviewer has asked me to include a statement of reflexivity, which is, I believe, seen as valuable in other disciplines and would enhance transparency.

Re: Let's Focus on Research Transparency

Posted: Sat Dec 17, 2016 12:21 pm
by Guest
"With respect to production transparency, for example, we can indicate, in general terms, the process through which we identified and recruited our sample of interview subjects; how many subjects were interviewed; whether our requests were rejected by some potential subjects, and whether those rejections introduce bias into the sample; and more generally whether our sample is broadly representative in terms of the variables that might be thought relevant to our study. All of this would have to be done with great care, so as not to inadvertently "out" subjects to whom we have promised confidentiality, but in most cases the obstacles are not insurmountable."

"With respect to analytic transparency, similarly, we may not be able to provide transcripts to prove that we are not cherry-picking quotes and evidence, but we certainly can indicate in general terms whether our quotes are representative of the sample as a whole, for example by noting that a particular view was expressed by 9 out of 12 respondents, or that a particular view was expressed most often by a particular category of respondent."

I agree with these suggestions. In the past, I have done (and reported) some of these things, such as keeping track of how I recruited interview subjects, documenting how many subjects I interviewed and what their general positions/affiliations were, and providing some "statistics" about how representative quotes were of a sample as a whole if I was trying to make the case that the quotes were indeed representative of a sample as a whole. Some of these pieces of information are provided in the main text of a book/article and others in an appendix. I was not pushed during my training to be transparent in some of the other ways suggested in this thread/in the excellent Mosley volume, but I am being more attentive to them in my current research projects. I think that this type of research transparency, where feasible, will probably improve my work and give credibility to high-quality interview-based research more generally.

Re: Let's Focus on Research Transparency

Posted: Mon Dec 19, 2016 3:59 pm
by ebleich
"Not having read the Bleich and Pekkanen chapter, I wonder: do the authors include reflexivity as a way of enhancing analytic transparency? Analytic transparency might/should include a discussion of how the researcher's position, funding, and training affect the questions that are being asked and how the answers are being interpreted, and so on. Interestingly, my press wanted me to be more reflective about my own position and training, something that was suggested by a reviewer (who was a historian, by the way). To the best of my memory, no political science journal, editor, or reviewer has asked me to include a statement of reflexivity, which is, I believe, seen as valuable in other disciplines and would enhance transparency."


Thanks for your thoughtful comment and your question, Alice. Just to answer quickly, we do not talk about reflexivity in our chapter. As you note, this is not a question that most political scientists are trained to write about, or, in many cases, even to be conscious of. I view reflexivity as a concern that pervades all forms of question formation, information gathering, and analysis. However, I do think there are certain elements that are specific to interviews, and maybe you had some of these in mind. In our chapter we briefly discuss vignettes in which we were flat-out lied to by people in power, or in which two interviewers in the same room interpreted statements in diametrically opposite ways. For us, these were issues related to the cultural and/or linguistic competence of interviewers. In other words, if your specific situation means you do not have enough cultural or linguistic competence, you might dramatically misunderstand the meaning of the interview. It is also possible, for example, that as a white man with pretty good German fluency, I get different answers to questions posed to German far-right sympathizers than someone from a different demographic would.

These sorts of concerns are more specific to the interview setting. They are factors that are important in addition to the broader issues of reflexivity that you raise. I think one big challenge is to raise awareness of the multiple ways in which our situations affect our research questions, our choice of methods, and in particular our interviews. Being open about how we've weighed these up can be very helpful in my opinion.

Re: Let's Focus on Research Transparency

Posted: Sat Dec 31, 2016 3:26 pm
by pollack57
Thanks, everyone, for your feedback on this point as well as in the other threads. From these exchanges, I draw two tentative conclusions about research transparency in human subjects research.

First, in contrast with data access, which raises serious ethical dilemmas involving our commitments to populations who are vulnerable or to whom we have promised confidentiality, Bleich, Pekkanen, and others identify multiple ways in which scholars can carefully increase the transparency of their research methods without compromising their commitments to human subjects.

Second, however, several participants in these exchanges noted (as I would as well) that traditional methods training, which is historically weak on interview methodology, is particularly so on questions of transparency and reporting of interview data. This is, therefore, an area where identification of useful sources (like Bleich and Pekkanen) and best practices (like those highlighted by Alice Kang) can be especially helpful to researchers and editors alike.

MAP

Re: Let's Focus on Research Transparency

Posted: Sat Dec 31, 2016 8:04 pm
by Tasha Fairfield
pollack57 wrote: Second, however, several participants in these exchanges noted (as I would as well) that traditional methods training, which is historically weak on interview methodology, is particularly so on questions of transparency and reporting of interview data. This is, therefore, an area where identification of useful sources (like Bleich and Pekkanen) and best practices (like those highlighted by Alice Kang) can be especially helpful to researchers and editors alike.

MAP


These are very helpful reflections, thank you. Regarding 'best practices' for reporting interviews, I have a concern that I'd like to share, drawing on a related discussion on the comparative/process tracing blog page (see "Presenting our evidence", https://www.qualtd.net/viewtopic.php?f=20&t=124). It seems to me that the person best positioned to make decisions about how interviews should be reported is the author--any 'best practices' that we come up with will have to be very flexible, not only to handle a wide range of human subjects concerns, but also to accommodate the different ways that interviews are used analytically--whether interpretive, process tracing, survey research, or some other methodological/ontological approach.

For example, regarding the Bleich and Pekkanen piece mentioned above, I think it's important to point out that their perspective comes from survey research, where the norm is random sampling from a population, and things like response rates, non-response bias, and representativeness of the sample matter quite a lot for orthodox large-N statistical inference. However, these concerns are by and large not relevant to process tracing/case study research, where scholars usually are not aiming to infer the 'average view' or the mean value of some trait in a population (e.g. local NGOs, governing party MPs); instead, we are looking for key informants who can provide us with diagnostic evidence to assess alternative hypotheses, following Bayesian logic. Of course we want to dig hard for evidence and try to reach everyone who is in a position to share relevant information, but the strength of inference is not based on how many people we interviewed, of which type, or how long the interviews lasted; it is based on whether the information we uncovered is strongly decisive in weighing in favor of a working hypothesis against a rival. In my view, asking someone who does process tracing research to report this kind of detail simply creates busywork that will not help readers understand the analysis--to the contrary, these details might encourage a reader to evaluate the work based on a methodological template that simply does not fit the research. I imagine interpretive scholars would have similar concerns.
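
To make that Bayesian logic concrete, here is a minimal sketch in standard Bayes-factor notation (the symbols are mine for illustration, not drawn from Bleich and Pekkanen or from this thread): for a piece of interview evidence E, a working hypothesis H_w, and a rival H_r,

    \[
      \frac{P(H_w \mid E)}{P(H_r \mid E)} \;=\; \frac{P(H_w)}{P(H_r)} \times \frac{P(E \mid H_w)}{P(E \mid H_r)} .
    \]

The inferential weight of E is carried by the likelihood ratio P(E | H_w) / P(E | H_r): a single well-placed informant whose testimony would be very surprising under the rival hypothesis can shift the odds more than dozens of interviews that are equally probable under either hypothesis.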