
[off topic] Introductions!

Posted: Fri Apr 08, 2016 12:41 pm
by paolospada
Hi all,
I propose a thread in which we can introduce ourselves and maybe add why we got involved in this and our top three problems with DA-RT. The latter should be a quick, simple preview; we can discuss things in detail in specific threads. This is more of a survey of the general sentiment of the community, so do not worry about repeating arguments posted by others or not explaining things in detail.

I will start. My name is Paolo Spada, I am a postdoc at Southampton University, and I work with mixed methods on democratic innovations.

I got involved in this discussion at APSA, when I attended a panel on the ethics of research in which Elizabeth Wood analyzed some of the problems of imposing standards devised for quantitative research on qualitative research.

I am not primarily a qualitative researcher, so I will probably read most of what the other members of the community write, ask for clarification when I do not get something, and maybe jump in if somebody starts a discussion on QCA or on integrating quantitative and qualitative data.

Three quick things about DA-RT:

1) I have done limited qualitative research in low-risk environments during my projects, but the very little I have done has generated boxes full of notes and field diaries, half of which I can barely understand myself after a few years. My primary methods are statistical analysis and experiments; I conduct participant observation and informal interviews before, during, and after my quantitative work for obvious reasons: I can't design any decent impact evaluation without speaking to the people who are living what I need to evaluate and without participating in it myself. I use these data mostly for framing, storytelling, and theory building. The idea that I have to scan those notes, which are basically personal reminders of life experiences, is surreal. Without a massive amount of work they would be meaningless to anybody else; I write badly in three languages, and I usually take notes in a weird mix of them depending on the setting. My structured and semi-structured interviews, on the other hand, would be easier to upload, precisely because I have digital records and I obtain consent before each one. Studying democratic innovations, many of my interviews are with elites and public figures and carry very low risk once consent is given. My other interviews, with participants and NGO organizers who heavily criticize local governments, are another story. They might get their funding cut or incur other reputational costs, so even in a field like mine, with zero physical risk, I often discuss at length with the interviewee what to do. Many times they want their voice published; other times they assume nobody will ever read my academic work; and sometimes they ask me not even to mention that an interview took place at all and to use the material as if I had come up with it through participant observation.

2) The idea that there is a technological solution to malpractice in academia, which to me appears to be a driver of this, is problematic. Replication from original data only allows us to check internal consistency; there is no way to establish that the data are fake without new research. Replication is actually a confusing word even if we limit the focus to simple quantitative studies, because it alludes to two separate things: 1) checking that the code runs on the dataset and reproduces the exact results reported in the published paper (transparency?), and 2) doing a new study with the same methodology to explore external validity (replication?). The Dutch psychologist who faked more than 50 experiments was caught not because somebody failed to replicate one of his experiments using his code and data, but because a grad student reported him. The same occurred with the LaCour fraud: it was uncovered only when new data were generated.

3) DA-RT emerged because we have finally realized that we can replicate almost nothing in published quantitative research, since there is little focus on data and code transparency practices in our teaching of quantitative methods. While there is a teaching value in replicating quantitative research, because students and researchers learn the code, it is unclear to me that the same value can be generated from the field notes behind ethnographic articles and books such as Weapons of the Weak. Research based on structured interviews transcribed in low-risk settings, process tracing, QCA, archival research in which everything is public, and high-risk research are all very different stories. There are many possible standards, and without this type of conversation to define what works for each sub-field and method, forcing one standard on everyone will generate all sorts of biases. The quantitative standard of DA-RT works fairly well once some sort of embargo or limitation is added to prevent scooping; we got one semi-right, now we need a few more.

cheers!
Paolo
Southampton University