Is Science Broken or Not?

Submitted by philipp on Fri, 08/21/2015 - 11:12

“Science isn’t broken” is a marvelous article on science, careless data interpretation, and the cult of publishing only significant results.

In the last two nibbles on this site I linked to articles that criticized the practice of “p-hacking” (archive.org) and illustrated the “confirmation bias” (/node/2108) – two phenomena that, in combination, lead to innumerable bogus scientific research papers and have prompted a growing number of serious scientific outlets to change their review and publication policies.

“p-hacking” refers to the practice of including vague independent and dependent variables in empirical studies without a-priori defined hypotheses about their interrelationships. This often leads to fiddling around with the empirical data and iterating through each and every combination of variables until significant results emerge, as only significant results are considered veritable (and publishable).
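
To make this concrete, here is a minimal simulation (my own illustration, not code from the article) that p-hacks a dataset of pure random noise: it tests every pairwise correlation among unrelated variables and reports whichever pairs happen to cross the conventional p < 0.05 threshold.

```python
# P-hacking pure noise: generate unrelated random variables, test every
# pairwise correlation, and keep whatever comes out "significant".
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_variables = 50, 10

# Pure random noise: by construction, no variable is related to any other.
data = rng.normal(size=(n_subjects, n_variables))

significant = []
for i, j in itertools.combinations(range(n_variables), 2):
    r, p = stats.pearsonr(data[:, i], data[:, j])
    if p < 0.05:  # the conventional significance threshold
        significant.append((i, j, r, p))

print(f"Tested {n_variables * (n_variables - 1) // 2} variable pairs.")
print(f"Found {len(significant)} 'significant' correlations in pure noise:")
for i, j, r, p in significant:
    print(f"  var{i} ~ var{j}: r = {r:+.2f}, p = {p:.3f}")
```

With 10 noise variables there are already 45 pairs to test, so at a 5% significance level one should expect roughly two spurious “findings” on average.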

This procedure is careless and harmful: as the number of statistical tests grows, the probability of finding at least one spurious significant result converges to 1. In fact, the number of variables one needs to include in a study to achieve significant findings purely by chance and random noise in the data can be calculated (as in the “Chocolate Helps Weight Loss” article).
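
The arithmetic is simple enough to verify in a few lines (a back-of-the-envelope sketch assuming a significance level of 0.05 and independent tests, not the article’s exact numbers): with k tests, the probability of at least one false positive is 1 − (1 − 0.05)^k.

```python
# Probability of at least one spurious "significant" result among
# k independent tests at significance level alpha: 1 - (1 - alpha)**k.
import math

alpha = 0.05
for k in (1, 5, 10, 20, 60):
    p_false = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests -> P(at least one false positive) = {p_false:.2f}")

# Smallest number of independent tests needed so that random noise alone
# yields a significant finding with probability >= 0.95:
target = 0.95
k_needed = math.ceil(math.log(1 - target) / math.log(1 - alpha))
print(f"{k_needed} tests suffice for a {target:.0%} chance of a fluke.")
```

Already at 60 tests the chance of a purely random “discovery” exceeds 95%.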

The problem here is that once significant findings turn up, the confirmation bias kicks in and quickly builds a cognitive model around why the discovered relationship must be true, and why one knew it all along.

The article “Science isn’t broken” discusses p-hacking and confirmation bias in more detail and provides a nifty interactive dataset in which you can investigate whether Democrats or Republicans are better for the US economy, finding evidence for every possible answer through p-hacking on real data.

To reduce the number of papers presenting random findings in journals and at conferences, I strongly argue for a-priori (and maybe even pre-submitted?) hypotheses. This more or less rules out the bad habit of searching for and interpreting significant, though often irrelevant, relationships in data sets, and it reduces the effect of the confirmation bias. Obviously, a dataset can still be investigated in an exploratory fashion after formally iterating through all hypotheses.
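
As a sketch of what evaluating such a fixed, pre-registered set of hypotheses could look like (my own illustration with hypothetical p-values, using a plain Bonferroni correction, which neither the article nor this post prescribes):

```python
# Pre-registered alternative: fix the hypotheses before seeing the data
# and correct the significance threshold for the number of planned tests
# with a simple Bonferroni correction (alpha / m).
alpha = 0.05

# p-values for three hypotheses written down a priori
# (hypothetical numbers, for illustration only).
preregistered = {
    "H1: treatment improves outcome": 0.004,
    "H2: effect is larger in group A": 0.030,
    "H3: effect persists after 6 months": 0.210,
}

corrected_alpha = alpha / len(preregistered)
for hypothesis, p in preregistered.items():
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"{hypothesis}: p = {p:.3f} -> {verdict} "
          f"(threshold {corrected_alpha:.4f})")
```

Because the set of tests is fixed in advance, the correction factor is known before the data are touched, which is exactly what open-ended variable fishing makes impossible.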

See also:

  1. G. Gigerenzer, “Mindless statistics” (2004), The Journal of Socio-Economics 33, 587–606. doi:10.1016/j.socec.2004.09.033
  2. P. Wason, “On the failure to eliminate hypotheses in a conceptual task” (1960), Quarterly Journal of Experimental Psychology 12 (3), 129–140. doi:10.1080/17470216008416717