III.1. Comparative methods and process tracing
-
Ingo Rohlfing
Cologne Center for Comparative Politics, Universität zu Köln - Posts: 20
- Joined: Tue May 24, 2016 5:45 am
What problem does access to evidence and transparency address?
-
Marcus Kreuzer
Villanova University - Posts: 26
- Joined: Sat Apr 16, 2016 9:48 am
Re: What problem does access to evidence and transparency address?
I believe that causal process tracers, or qualitative scholars more broadly, have a much more expansive understanding of analytical transparency, one rooted in Bayesian analysis. The connection between data and claims is conditional on making explicit the following elements:
1. Explicating Priors: This involves clearly stating how much foreknowledge is available on a given subject (what we know and what we still don't know) and how that foreknowledge affects the confidence we can place in a test result. In short, analytical transparency requires a close and careful review of the literature.
2. Specificity: How many predictions does a particular theory make relative to competing theories? A larger number of predictions makes tests riskier and thus generates more robust results. Analytical transparency therefore requires that we stop treating every test hypothesis as created equal and instead differentiate hypotheses in terms of their specificity.
3. Test Strength: How many competing theories does a particular test evaluate, and how different are the predictions of those competing theories? Tests become stronger to the extent that the competing hypotheses are more distinct and more alternative hypotheses are tested. Analytical transparency thus requires more information about which control variables were chosen and which were ignored.
4. Ontological Assumptions: What assumptions does a test make about the uniformity of evidence across cases, the independence of evidence across those cases, or the temporal structure of causation (Pierson)? And are the test results conditional on some geographic or historical boundary conditions?
Various process tracers are trying to make these four criteria key building blocks of analytical transparency. The criteria are well articulated but not yet widely used. (See Ingo's work, Bennett & Checkel, Peter Hall's "Aligning Methodology and Ontology", and Beach & Pedersen, who have written on this.)
So the question becomes whether those four criteria fit DA-RT's understanding of analytical transparency?
ingorohlfing wrote:What problems do access and transparency diminish, in your view?
-
Macartan Humphreys
Re: What problem does access to evidence and transparency address?
Re another thread on formalization/quantification opened by Tasha, I’d also suggest not mixing up the general issue of transparency with quantification. Quantification may or may not be possible or desirable in different settings but I don’t see any argument that it is necessary or sufficient for analytic transparency.
I think the issue around analytic transparency for process tracing is more basic: **can researchers provide a mapping from the sort of evidence they might see (and seek) to the sorts of conclusions they might draw**, whether or not the conclusions are expressed as posterior probabilities?
Marcus gives a nice example of this for a strategy in which you want a Bayesian conclusion. I think that is a fruitful way to go. But it is not the only way. One could imagine something similar using a falsificationist approach without any specification of priors and with simpler conclusions (e.g. reject / fail to reject).
The hard challenge that any version of this will face I think is the difficulty of specifying in advance the sorts of patterns that might be seen, and how they would be interpreted, given the complexity of qualitative data and the possibility of unexpected, but still interpretable, patterns. I'd be interested in hearing thoughts on ways to address that challenge.
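As a concrete illustration of what such a mapping could look like (a minimal sketch; every evidence type and probability below is invented for illustration, not drawn from any actual study), one could write down the two variants side by side:

```python
# Hypothetical sketch: pre-specifying a mapping from possible evidence
# to conclusions. All evidence labels and probabilities are invented.

# Bayesian version: each anticipated piece of evidence gets a likelihood
# under the working hypothesis (H) and under the rival (notH).
likelihoods = {
    "smoking_gun_document_found": {"H": 0.30, "notH": 0.01},
    "key_informant_confirms":     {"H": 0.70, "notH": 0.40},
    "no_trace_in_archives":       {"H": 0.20, "notH": 0.60},
}

def posterior(prior_h, observed):
    """Update P(H) by multiplying in the likelihood ratio of each observation."""
    odds = prior_h / (1 - prior_h)
    for e in observed:
        odds *= likelihoods[e]["H"] / likelihoods[e]["notH"]
    return odds / (1 + odds)

# Falsificationist version: a decision rule mapping evidence directly to a
# verdict, with no priors and a much simpler conclusion space.
def verdict(observed):
    return "reject H" if "no_trace_in_archives" in observed else "fail to reject H"

print(round(posterior(0.5, ["smoking_gun_document_found"]), 3))  # 0.968
print(verdict(["no_trace_in_archives"]))                         # reject H
```

The Bayesian version carries the full burden described above: a likelihood under each hypothesis for every anticipated piece of evidence. The falsificationist version needs only a decision rule, at the cost of a much coarser conclusion.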
-
Hillel Soifer
Temple University - Posts: 8
- Joined: Wed Apr 13, 2016 9:12 am
Re: What problem does access to evidence and transparency address?
Macartan Humphreys wrote:The hard challenge that any version of this will face I think is the difficulty of specifying in advance the sorts of patterns that might be seen, and how they would be interpreted, given the complexity of qualitative data and the possibility of unexpected, but still interpretable, patterns. I'd be interested in hearing thoughts on ways to address that challenge.
Thanks for this thoughtful post, Macartan. You very nicely phrased the core challenge in making process-tracing research transparent. This is indeed the challenge we're grappling with, though there is some disagreement among qualitative scholars about whether transparency requires specifying things in advance, or whether one can be transparent in one's research even if observable implications are not all identified beforehand. I just want to tease apart those two issues, and suggest that transparency about interpreting evidence faces the same issues you highlight (specifying patterns relevant to testing causal claims, and setting standards for interpretation amid complex data and the possibility of complex findings) even if a researcher does not agree that all of this must be specified in advance.
-
Ingo Rohlfing
Cologne Center for Comparative Politics, Universität zu Köln - Posts: 20
- Joined: Tue May 24, 2016 5:45 am
Re: What problem does access to evidence and transparency address?
I also agree with Hillel that transparency does not require specifying expectations in advance, as this is impossible in exploratory research. However, it is a problem if one does not know how sources were selected and processed, how the case was chosen, and so on, because these issues determine what we infer about a case and, possibly, to what cases we generalize. This holds regardless of whether the study is exploratory or confirmatory.
-
Tasha Fairfield
LSE - Posts: 17
- Joined: Mon Sep 05, 2016 4:05 pm
Re: What problem does access to evidence and transparency address?
We are aiming to discuss both process tracing and comparative analysis. On process tracing, you might take a look at the thread on Bayesianism and alternatives, where we've posted some links to current works on this topic. More generally, we would welcome any thoughts on specific problems that should be addressed in the current practice of process tracing and/or comparative analysis, as well as examples of excellent qualitative research that we can build on (see the thread on existing practices).
-
Guest
Re: What problem does access to evidence and transparency address?
This would seem to present not just a "hard" challenge, but an impossible standard. In the context of qualitative research into complex, real-world socio-political phenomena, there is basically no limit to the different pieces and sorts of evidence that might be encountered, and no way to exhaustively catalogue the infinite possibilities and pre-analyze all of them in advance.
Fortunately, there is no need to do so. Any reasonable standard of scientific transparency need not demand that researchers provide ahead of time a full mapping between possible evidence and possible conclusions, but only require that they endeavor to supply an honest, rational, and critical assessment of the evidence actually obtained.
Whether the supplied evidence and its analysis are considered cogent is a question for the larger community of scholars who may debate and ultimately accept or question the conclusions. But whether the original analysis was somehow generated before the actual data were collected and pre-registered seems largely beside the point, logically speaking. Peer reviewers and other scholars should assess and critique the argument in the same way and using the same logical, statistical, or historical counter-arguments either way.
Before heading out on the Beagle, Charles Darwin had not even dreamed up the theory of natural selection, and certainly could not have predicted and analyzed ahead of time the forms or behaviors of all possible organisms he might observe. This did not make his eventual insights any less compelling.
-
Guest
Re: What problem does access to evidence and transparency address?
I would argue that the Bayesian nature of process tracing does not rise or fall on whether priors are specified. Whenever probabilistic language or logic is used, implicitly or explicitly, and the probabilities cannot naturally be interpreted as long-run relative frequencies in some random trials, the analysis may be regarded as Bayesian, at least to some extent.
Frequentist critics of Bayesianism often focus on the latter's need for prior probabilities, but this is not always the most salient contrast.
-
Ingo Rohlfing
Cologne Center for Comparative Politics, Universität zu Köln - Posts: 20
- Joined: Tue May 24, 2016 5:45 am
Re: What problem does access to evidence and transparency address?
I am not familiar with Darwin's discoveries in much detail, but there is a difference between exploratory and confirmatory research (after all, we speak of the Texas sharpshooter problem in confirmatory research for a reason). An assessment of arguments is valuable, but we cannot judge the empirical analysis only by discussing the conclusions. It might already figure in some thread in this forum: If you do interviews, I want to know whom you interviewed and how you selected the interviewees because you might get different evidence from different interviewees. A critical assessment of how the evidence was gathered and a critical assessment of how the evidence was interpreted need to go hand in hand, in my view.
Guest wrote: Any reasonable standard of scientific transparency need not demand that researchers provide ahead of time a full mapping between possible evidence and possible conclusions, but only require that they endeavor to supply an honest, rational, and critical assessment of the evidence actually obtained.
Whether the supplied evidence and its analysis are considered cogent is a question for the larger community of scholars who may debate and ultimately accept or question the conclusions. But whether the original analysis was somehow generated before the actual data were collected and pre-registered seems largely beside the point, logically speaking. Peer reviewers and other scholars should assess and critique the argument in the same way and using the same logical, statistical, or historical counter-arguments either way.
Before heading out on the Beagle, Charles Darwin had not even dreamed up the theory of natural selection, and certainly could not have predicted and analyzed ahead of time the forms or behaviors of all possible organisms he might observe. This did not make his eventual insights any less compelling.
-
Tasha Fairfield
LSE - Posts: 17
- Joined: Mon Sep 05, 2016 4:05 pm
Re: What problem does access to evidence and transparency address?
On point 2: Specificity--as Marcus defines it in terms of the "number of predictions" that one theory makes compared to another--is not relevant. Bayesian probability is not about counting up predictions, or counting up anything really. What matters is the likelihood ratio of the evidence--how probable would the evidence be if we imagine that one hypothesis is true, as compared to a rival hypothesis? In other words, which hypothesis makes what we see more plausible? This is the key inferential step that governs how we update the odds on one hypothesis being correct vs. a rival hypothesis. In our paper on explicit Bayesian process tracing, A.E. Charman and I lay out guidelines for how to assess likelihood ratios, by "mentally inhabiting the world" of each hypothesis, so to speak.
On point 3, test strength does not have to do with the number of predictions a theory makes or the number of theories being tested. Instead, test strength relates to the probative value of the evidence--once again, the likelihood ratio under rival hypotheses. We argue in our paper that the notion of subjecting a hypothesis to a series of tests is too close in spirit to a frequentist perspective; we advocate simply evaluating priors and likelihoods and updating by direct application of Bayes' rule. However, we also offer a detailed discussion of how to incorporate Van Evera's process tracing tests into a Bayesian framework, building on Humphreys and Jacobs (2015) but arguing that likelihood ratios and/or the concept of relative entropy provide the most sensible classification.
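As a rough numerical sketch of this updating logic (the probabilities are invented; the decibel convention for weights of evidence is one common way of expressing log likelihood ratios):

```python
import math

# Hedged sketch: likelihood ratios as probative value, accumulated as
# weights of evidence (log likelihood ratios), following the odds form of
# Bayes' rule. All probabilities are invented for illustration.

def weight_of_evidence(p_e_h1, p_e_h2):
    """Log likelihood ratio in decibels (10 * log10), favouring H1 over H2."""
    return 10 * math.log10(p_e_h1 / p_e_h2)

# Two pieces of evidence: one highly probative, one nearly uninformative.
w1 = weight_of_evidence(0.60, 0.05)   # strong: ~10.8 dB in favour of H1
w2 = weight_of_evidence(0.50, 0.45)   # weak:   ~0.5 dB in favour of H1

prior_odds = 1.0                      # start indifferent between H1 and H2
posterior_odds = prior_odds * 10 ** ((w1 + w2) / 10)
print(round(posterior_odds, 2))       # 13.33
```

Note that the single highly probative observation (w1) moves the odds far more than the weakly probative one (w2), regardless of how many predictions either hypothesis happens to make.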
You can read our discussion about process tracing tests here, in Section 5: http://tashafairfield.wixsite.com/home/research
We welcome comments on the paper.