This article assesses recent developments in automated phenotype pattern recognition: potential gains in classification performance, even on the small-scale data sets common in biomedicine, and changes in the development effort and complexity facing researchers and practitioners. After reading, you will be aware of the benefits, the unreasonable effectiveness, and the ease of use of an automated end-to-end deep learning pipeline for classification tasks in biomedical perception systems.