Computational systems biology seems to be caught in what we call the ‘self-assessment trap’, in which researchers wishing to publish their analytical methods are required by referees or by editorial policy (e.g., Bioinformatics, BMC Bioinformatics, Nucleic Acids Research) to compare the performance of their own algorithms against other methodologies, thus forcing them to act as judge, jury and executioner. The result is that the authors’ method tends to be the best in an unreasonable majority of cases (Table I). In many instances, this bias results from selective reporting of performance in the niche in which the method is superior. Evidence for this is that most papers reporting best performance choose only one or two performance metrics, whereas when more than two metrics are used, most methods fail to be the best in all categories assessed (Table I). Choosing many metrics can dramatically change the determination of best performance (Supplementary Table S1). Selective reporting can be inadvertent, but in some cases the bias is more disingenuous, involving hiding information or quietly cutting corners in the performance evaluation (similar problems have been discussed in assessments of the performance of supercomputers, e.g., Bailey (1991)).
Even assuming that there is no selective reporting, we would like to argue that papers reporting good-yet-not-the-best methods (of which we found none in our literature survey of self-assessed papers listed in the Supplementary information) can still advance science. For example, a method that is not top ranked can still have value by unearthing biological results that are complementary to those reported by better performing methods. Furthermore, the effectiveness of a top-performing algorithm can be boosted when its results are aggregated with those of the second- and third-best performers (Figure 1, and Supplementary Figures S1 and S2; Marbach et al, 2010; Prill et al, 2010). This discussion suggests that self-evaluation is suspect and that insisting on the publication of only best-performing methods can suppress the reporting of good-yet-not-best methods that also have scientific value.
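One simple way to realize such aggregation, in the spirit of the community predictions of Marbach et al (2010), is to average the rank that each method assigns to each candidate. The sketch below is a minimal, purely illustrative Python version; the three methods, their confidence scores and the gene-pair names are all invented for this example:

```python
def rank(scores):
    """Map each item to its rank under one method (1 = highest score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {item: i + 1 for i, item in enumerate(ordered)}

def aggregate(predictions):
    """Average, over methods, the rank each method assigns to each item."""
    items = list(predictions[0])
    ranks = [rank(p) for p in predictions]
    return {item: sum(r[item] for r in ranks) / len(ranks) for item in items}

# Invented confidence scores from three hypothetical inference methods
# for the same three candidate gene-gene interactions:
method_a = {"g1-g2": 0.9, "g1-g3": 0.2, "g2-g3": 0.6}
method_b = {"g1-g2": 0.7, "g1-g3": 0.8, "g2-g3": 0.3}
method_c = {"g1-g2": 0.5, "g1-g3": 0.4, "g2-g3": 0.9}

# Lower average rank = higher consensus confidence.
consensus = aggregate([method_a, method_b, method_c])
```

In this toy example, the pair g1-g2 tops the consensus because the methods broadly agree on it, even though two of the three methods rank it only second; rank averaging of this kind is one of the simplest schemes behind the ‘wisdom of crowds’ effect reported for the DREAM challenges.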
In the biosciences, as in other natural sciences, we are often faced with situations that have been referred to as uncomfortable science, a term attributed to the statistician John Tukey, in which the little available data are used both to build the inference model and to perform the confirmatory data analysis. The resulting overoptimism of the ‘confirmatory’ results is often referred to as ‘systematic bias’. Similarly, an ‘information leak’ from data to methods can occur through improper and repeated cross-validation. In the general case, an information leak results from developing or training an algorithm on the entire available data set, so that the test set is not independent. In some cases, the leak can occur subtly and inadvertently, such as when very similar samples are present in both the training and test sets. A better-known effect is ‘overfitting’, in which a model achieves superior accuracy on its training data at the cost of reduced generalization to new data sets. A notable example of this effect can be found in the search for biomarker signatures in cancer. For about a decade, scientists have scoured high-throughput data for collections of genes or proteins that can be used in the diagnosis or prognosis of cancer. However, the tools used to find signatures in massive data sets can yield spurious associations with phenotype (Ioannidis, 2005), even when the results appear statistically sound in self-assessment. In most cases, unfortunately, these signatures do not generalize: when put to the task of demonstrating their diagnostic or prognostic value, the accuracy of the predictions in impartial assessments on previously unseen patients is much poorer than on the original data. This problem with cancer signatures is of sufficient general interest to have been highlighted recently in the popular media (Kolata, 2011).
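The information leak can be made concrete with a small simulation: if the single feature most associated with the class labels is selected using the entire data set, even pure noise looks predictive when ‘validated’ on that same data, whereas an impartial test on previously unseen samples falls back to chance. The sketch below is a hypothetical illustration in plain Python (all sample sizes and the median-split classifier are our own invented choices, not any published signature method):

```python
import random

random.seed(0)

def median_split_accuracy(values, labels):
    """Predict class 1 when a value exceeds the median of `values`;
    return the fraction of correct predictions."""
    med = sorted(values)[len(values) // 2]
    hits = sum((v > med) == (l == 1) for v, l in zip(values, labels))
    return hits / len(values)

# 40 samples with arbitrary 0/1 labels and 500 features of PURE noise.
n_samples, n_features = 40, 500
labels = [i % 2 for i in range(n_samples)]
features = [[random.random() for _ in range(n_samples)]
            for _ in range(n_features)]

# Leaky self-assessment: choose the feature that looks best on the FULL
# data set, then report its accuracy on that same data.
best = max(range(n_features),
           key=lambda j: median_split_accuracy(features[j], labels))
leaky_acc = median_split_accuracy(features[best], labels)  # well above 0.5

# Impartial assessment: apply the same median-split rule to previously
# unseen samples; with no real signal, accuracy collapses toward 0.5.
m = 1000
unseen_labels = [i % 2 for i in range(m)]
unseen_values = [random.random() for _ in range(m)]
unseen_acc = median_split_accuracy(unseen_values, unseen_labels)
```

Because 500 noise features are screened, at least one will associate with the labels by chance, so the leaky estimate is well above the 50% chance level while the unseen-data estimate is not; the honest alternative is to repeat the feature selection inside each training fold, or better, to hold out a genuinely independent test set.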
To alleviate the overestimation of accuracy arising from the many sources of bias described above, we propose a few guidelines:
use third‐party validation to test a model with previously unseen data
use more than one metric to evaluate the methods
report well‐performing methods even if they are not the best performers on a particular data set
increase the awareness of editors and reviewers that superior performance in self‐assessment is a biased demonstration of the method's value; instead, impartial assessment should be the preferred evaluation
establish a scientific culture that values timely, well-conducted follow-up studies that confirm or refute previous results
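The guideline on using more than one metric can be illustrated with a deliberately simple hypothetical example: two classifiers evaluated on the same 20 positive and 80 negative cases swap ranks depending on whether precision or recall is the single metric reported (the confusion counts below are invented for illustration):

```python
def precision(tp, fp):
    """Fraction of predicted positives that are real positives."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of real positives that are recovered."""
    return tp / (tp + fn)

# Method A is conservative: it calls only 5 positives, all of them correct.
a_prec = precision(tp=5, fp=0)    # 1.000 -> A 'wins' if only precision is reported
a_rec = recall(tp=5, fn=15)       # 0.250

# Method B is liberal: it calls 48 positives, recovering 18 of the 20 real ones.
b_prec = precision(tp=18, fp=30)  # 0.375
b_rec = recall(tp=18, fn=2)       # 0.900 -> B 'wins' if only recall is reported
```

Neither method dominates the other; reporting both metrics (or a combined one such as the F-measure) gives a far more honest picture than reporting only the metric on which one's own method happens to excel.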
To a large extent, the remedies suggested above have already been addressed in the context of genome-wide association studies (Chanock et al, 2007), and are embodied in existing independent assessments presented to the scientific community in efforts such as CASP (http://predictioncenter.org/), CAPRI (http://www.ebi.ac.uk/msd-srv/capri/) and DREAM (http://www.the-dream-project.org). In contrast to the usual practice of ‘post-diction’ (retrospective prediction of known results) as a way of testing methods, participants in these third-party collaborative competitions (also known as challenges) submit predictions that are evaluated by impartial scorers against an independent data set hidden from the participants. Performance in these evaluations better reflects the generalization ability of the methods, because the predictions are made on unseen data, thus minimizing many of the biases discussed above. We envision that a repository of blind challenges and data sets could be created (DREAM, for example, has 20 such data sets and challenges), with data produced on demand by third parties specifically funded to create verification data and challenges. Such a repository could be used to test the validity of many of the tasks that we deal with in systems biology, bioinformatics and computational biology.
In summary, systematic bias, information leak and overfitting can all be considered facets of the same self-assessment trap. That is, by knowing too much about the desired results, the researcher is snared into consciously or unconsciously overestimating performance. Moreover, the researcher is further lured into the trap by the common assumption that top performance is required for scientific value and publication. By exposing the self-assessment trap, we hope to lessen its effect, with the ultimate goal of advancing predictive biology and improving human healthcare.
Conflict of Interest
The authors declare that they have no conflict of interest.
The self‐assessment trap – Supplementary Materials
This supplement provides further support for the claims made in the main text. [msb201170-sup-0001.doc]
This is an open‐access article distributed under the terms of the Creative Commons Attribution License, which permits distribution, and reproduction in any medium, provided the original author and source are credited. This license does not permit commercial exploitation without specific permission.
Copyright © 2011 EMBO and Macmillan Publishers Limited