Research & Reporting Practices in Psychology

I often struggle with the way research is conducted and reported in psychology, and I am frequently disappointed with the field. It is the exception that I finish a paper feeling the authors actually gave me a complete picture of their study and results. Usually I suspect that aspects that did not turn out to be interesting, statistically significant, or clean were simply omitted.

[N. B. This is a somewhat scattered post. I will revise and clarify over time, but for now it is a rough log of my thoughts and various resources on the topic.]

The current research process seems inefficient and wasteful to me. Individual studies typically collect more data than are analyzed, so large amounts of data go to waste while we (the field) suffer from a lack of replicability due to (a) small samples and (b) insufficient accounting for population differences. Both problems would be helped by larger studies, or by multi-arm studies in which many small studies share common components that can then be easily aggregated. Meta-analysis partially addresses this, but even meta-analysis is hampered because published work rarely reports enough data and results to include everything (and that is just for main effects, let alone enough data to do more sophisticated things like examine moderation).

I believe it is now feasible to have a system for studies much like medical records, one that would allow common variables to be linked across studies from around the country. The questions that could be answered with that type of data, and the possibility of really trusting the results, would be phenomenal. Barring large studies with carefully documented, diverse samples, real transparency about the populations being studied, the methods, all the choices and decisions made, and the results would be incredibly valuable, even if, for brevity's sake, it lived in a stored supplement rather than the core article. Such a record could serve as a reference for later meta-analysts, for other researchers actually trying to run similar studies, or for critics.
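To make the "linked common variables" idea concrete, here is a minimal sketch of what pooling two studies might look like. The file names, column names, and variables are hypothetical; the point is only that a shared codebook mapping each study's local names onto common variable names would let datasets be combined and compared directly.

```python
# Minimal sketch (hypothetical file and variable names): harmonize two studies
# that measured the same constructs under different local column names by
# mapping them onto a shared codebook, then pool them for aggregate analysis.
import pandas as pd

# Shared codebook: study file -> {local column name: common variable name}
CODEBOOK = {
    "study_a.csv": {"age": "age_years", "bdi_total": "depression"},
    "study_b.csv": {"Age": "age_years", "BDI": "depression"},
}

frames = []
for path, mapping in CODEBOOK.items():
    df = pd.read_csv(path)
    df = df.rename(columns=mapping)[list(mapping.values())]
    df["source_study"] = path  # keep provenance so samples stay distinguishable
    frames.append(df)

# Pooled dataset with harmonized variable names
pooled = pd.concat(frames, ignore_index=True)
print(pooled.groupby("source_study")["depression"].describe())
```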

I know that it would not be a panacea, but increased data sharing and transparency could go a long way towards increasing trust in our results and reducing fraud.

Examples of high profile fraud:

http://www.bmj.com/content/345/bmj.e4596

http://www.nytimes.com/2011/11/03/health/research/noted-dutch-psychologist-stapel-accused-of-research-fraud.html

Possible fraud detection:

http://news.sciencemag.org/scienceinsider/2012/07/fraud-detection-tool-could-shake.html

The benefits of sharing one's scientific process in detail:

http://precedings.nature.com/documents/39/version/1

A nice report from SPSP overviewing some of the fraud cases, along with a report from the task force on things that can be done:

http://www.spsp.org/?ResponsibleConduct

What the task force did not mention (perhaps not all of it was available when they met) is that some tools and technology are already in place or available.

For example, for replications:
http://psychfiledrawer.org/

Transparent documentation and archiving:
http://openscienceframework.org/

At an extreme, something like this is feasible:

http://yihui.name/en/2012/03/a-really-fast-statistics-journal/

Some aspects of that proposal are not relevant here because it was geared toward statistics, but many of them could carry over if we adopted something similar. Programs already exist that link data to the code that runs the analyses, and link that code to the results tables. That is, the tables you see reported are generated automatically from the code that analyzes the data. The data, code, and final write-up could all be documented and archived. This at least makes the analyses fully transparent: for fraud to occur, it would have to be done in the raw data. Not impossible, but more challenging.
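As a rough illustration of that idea, here is a minimal sketch (with hypothetical file, variable, and condition names) of an analysis script where the reported table is produced directly by the code that analyzes the archived data, rather than being typed by hand:

```python
# Minimal sketch (hypothetical names): the reported table is generated
# directly by the analysis script, so data, code, and results stay linked.
import pandas as pd
from scipy import stats

data = pd.read_csv("archived_data.csv")  # the deposited raw data

# Group comparison on a hypothetical outcome variable
control = data.loc[data["condition"] == "control", "outcome"]
treatment = data.loc[data["condition"] == "treatment", "outcome"]
t_stat, p_value = stats.ttest_ind(treatment, control)

# Build the results table from the analysis itself
table = pd.DataFrame({
    "group": ["control", "treatment"],
    "n": [len(control), len(treatment)],
    "mean": [control.mean(), treatment.mean()],
    "sd": [control.std(), treatment.std()],
})
table.to_csv("table1.csv", index=False)  # pulled into the write-up automatically
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Archiving this script alongside the data file means anyone can regenerate the exact table that appears in the paper.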
