U.S. Department of Energy
Office of Scientific and Technical Information

Why is the winner the best?

Conference

International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.

Research Organization:
Argonne National Laboratory (ANL), Argonne, IL (United States)
Sponsoring Organization:
Helmholtz Association of German Research Centres; National Institutes of Health (NIH); Netherlands Organisation for Scientific Research (NWO); Engineering and Physical Sciences Research Council (EPSRC); Wellcome Trust; India Science and Engineering Research Board (SERB); European Union - Horizon 2020 Research and Innovation Programme; German Research Foundation (DFG); Agence Nationale de la recherche (ANR)
DOE Contract Number:
AC02-06CH11357
OSTI ID:
2005225
Resource Relation:
Conference: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 06/18/23 - 06/22/23, Vancouver, Canada
Country of Publication:
United States
Language:
English
