Volume 12 (2022): Issue 2 (April 2022)
Journal Details
License
Format
Journal
eISSN
2449-6499
First Published
30 Dec 2014
Publication timeframe
4 times per year
Languages
English
Access type: Open Access

A Progressive and Cross-Domain Deep Transfer Learning Framework for Wrist Fracture Detection

Published Online: 23 Feb 2022
Volume & Issue: Volume 12 (2022) - Issue 2 (April 2022)
Page range: 101–120
Received: 24 May 2021
Accepted: 09 Aug 2021
Abstract

The adoption of artificial intelligence (AI) in medical imaging applications has attracted growing attention and delivered substantial benefits. However, deep learning approaches require training on massive amounts of annotated data in order to guarantee generalization and achieve high accuracy. Gathering and annotating large sets of training images requires expertise that is both expensive and time-consuming, especially in the medical field. Furthermore, in health care systems, where mistakes can have catastrophic consequences, there is a general mistrust of the black-box nature of AI models. In this work, we focus on improving the performance of medical imaging applications when only limited data are available, while also addressing the interpretability of the proposed AI model. This is achieved by employing a novel transfer learning framework, progressive transfer learning, together with an automated annotation technique and a correlation analysis of the learned representations.

Progressive transfer learning jump-starts the training of deep neural networks and improves performance by gradually transferring knowledge from two source tasks into the target task. It is empirically tested on the wrist fracture detection application by first training a general radiology network, RadiNet, and using its weights to initialize RadiNet_wrist, which is then trained on wrist images to detect fractures. Experiments show that RadiNet_wrist achieves an accuracy of 87% and an ROC AUC of 94%, as opposed to 83% and 92% when the network is pre-trained on the ImageNet dataset.
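The weight-transfer step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fully-connected stand-in for RadiNet, the layer sizes, and the label counts are assumptions chosen for brevity (the actual models are convolutional networks, and training itself is elided).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_network(layer_sizes):
    """Randomly initialize a fully-connected network (illustrative stand-in)."""
    return [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

# Stage 1: "RadiNet" is trained on general radiology images (training elided);
# 14 is a hypothetical number of generic radiology labels.
radinet = init_network([256, 128, 64, 14])

# Stage 2: RadiNet_wrist inherits RadiNet's feature layers; only the
# classification head is re-initialized for the binary fracture task.
radinet_wrist = [w.copy() for w in radinet[:-1]]          # transferred features
radinet_wrist.append(rng.standard_normal((64, 2)) * 0.1)  # new 2-class head

# The transferred layers are identical at initialization ...
assert all(np.array_equal(a, b) for a, b in zip(radinet[:-1], radinet_wrist[:-1]))
# ... while the head matches the target task's output dimension.
assert radinet_wrist[-1].shape == (64, 2)
```

Fine-tuning RadiNet_wrist on wrist images then starts from radiology-specific features rather than from random or ImageNet weights, which is the source of the reported accuracy gain.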

This improvement in performance is investigated within an explainable AI framework. More concretely, the deep representations learned by RadiNet_wrist are compared to those learned by the baseline model through a correlation analysis experiment. The results show that, when transfer learning is applied gradually, some features are learned earlier in the network. Moreover, the deep layers in the progressive transfer learning framework are shown to encode features that are not encountered when traditional transfer learning techniques are applied.
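A correlation analysis of this kind can be sketched with a minimal canonical correlation analysis (CCA) routine over layer activation matrices. The activations below are synthetic stand-ins, and the SVD-based CCA here is a generic formulation rather than the paper's exact procedure:

```python
import numpy as np

def cca_correlations(X, Y):
    """Canonical correlations between two activation matrices
    (rows = input images, columns = neurons of a layer)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Whiten each view via SVD; the singular values of the product of the
    # whitened bases are the canonical correlations.
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    corrs = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return np.clip(corrs, 0.0, 1.0)

rng = np.random.default_rng(1)
acts_a = rng.standard_normal((500, 32))          # layer of one model
acts_b = acts_a @ rng.standard_normal((32, 32))  # linearly related layer
acts_c = rng.standard_normal((500, 32))          # unrelated layer

# Two layers encoding the same representation (up to a linear map) are
# perfectly correlated; an independent representation is only weakly so.
assert cca_correlations(acts_a, acts_b).mean() > 0.99
assert cca_correlations(acts_a, acts_c).mean() < 0.5
```

Comparing the mean canonical correlation layer by layer between two networks indicates at which depth each model has learned a given feature, which is how the "learned earlier in the network" observation can be quantified.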

In addition to the empirical results, a clinical study is conducted and the performance of RadiNet_wrist is compared to that of an expert radiologist. We found that RadiNet_wrist exhibited performance similar to that of radiologists with more than 20 years of experience.

This motivates follow-up research to train on more data, with the aim of surpassing radiologists’ performance, and to further investigate the interpretability of AI models in the healthcare domain, where the decision-making process needs to be credible and transparent.

