DA-RT will not raise or restore public trust in our discipline
Posted: Thu Apr 28, 2016 4:16 pm
* Posted anonymously by a tenure-track assistant professor at an elite liberal arts college *
Beyond the excellent critiques raised already, I also believe DA-RT skeptics should be asking: why now? DA-RT follows several well-publicized stories about the replicability crisis in the behavioral and psychological sciences, and the much-discussed fraudulent voter persuasion study. The media attention these stories receive fuels an already-strong anti-science, anti-intellectual bias in the American imagination. Simultaneously, leading figures in the discipline have tried for decades to elevate political science into a harder science, placing it on par with economics, mathematics, and even neuroscience. DA-RT emerges from this perfect storm: a public crisis of confidence in the Academy meets the ongoing agenda of some political science elites, resulting in a new paradigm that narrows the questions and methodologies deemed appropriate in the discipline. (For more on how this narrowing occurs, see other posts on this forum.)
The opportunism of DA-RT is deeply troubling. First, DA-RT takes advantage of the negative publicity surrounding edge cases (such as the fraudulent voter persuasion study) to justify its far-reaching interventions. DA-RT thus suggests that the publication of erroneous research is not the exception but the norm. Second, and relatedly, DA-RT implies that peer review is broken, because the current system cannot detect fraudulent or suspect research. Instead, raw data must be released, examined prior to publication, and made subject to continued analysis after publication, because only access, transparency, and replicability guard against researchers’ outright lies or accidental inaccuracies.
But if negligence is widespread, and peer review cannot detect it, then why rely on the double-blind vetting of articles at all? DA-RT suggests that the best judge of empirical inference comes not from anonymous reviewers, who bring their own expertise to bear on the subject, but from consistent and coordinated processes of data verification. Consequently, researchers could simply post all their data, their methodological appendices, and their inferences online; those with the requisite skills could then replicate the work; and the researchers’ findings could be voted up or down on the basis of this crowd-sourced data checking. Once every ingredient of the research process is made open access, the rationale for a closed-door vetting process conducted by anonymous reviewers and journal editors disappears.
Yet DA-RT architects do not intend to eliminate peer review. They intend to make political science more reliable, to elevate its public standing, and more broadly, to restore society’s faith in the scientific method and the wisdom it generates. However, DA-RT cannot achieve these goals.
Data repositories containing transparency indexes, codebooks, program scripts, interview transcripts, and other methodological documents are a misguided response to Americans’ poor scientific literacy. Providing a skeptical and dismissive public with even more impenetrable documentation will only fuel the cynicism surrounding academic research. For this reason, the end of peer review remains far off: despite DA-RT’s premise that peer review has failed, few lay people have the training, resources, or time to verify academic research. The move toward Big Data, of which DA-RT forms a part, cannot democratize knowledge when interpreting the material requires rarefied skills. DA-RT will not restore congressional funding or enhance our ability to influence policy design. Instead, DA-RT risks widening the very divide between the Academy and the public that it purports to erase.
Something must be done about the anti-intellectual fervor that grips the electorate and guides legislators. However, the costs to qualitative researchers, junior scholars, and under-resourced professors (as documented throughout this website) are too high to justify the negligible rewards the discipline would reap from imposing DA-RT. Moreover, DA-RT takes a fundamentally American, or perhaps Western, problem with scientific illiteracy and anti-intellectualism and imposes a one-size-fits-all paradigm on the global political science community. Lastly, we should reject the implication that most work published in top journals is purposely fraudulent or unintentionally incorrect. As scholars build on their peers’ and predecessors’ work, they call previously published findings to account, showing that the current system works rather well.