dc.contributor.author | Ozturk, Saban | |
dc.contributor.author | Alhudhaif, Adi | |
dc.contributor.author | Polat, Kemal | |
dc.date.accessioned | 2024-03-12T19:34:41Z | |
dc.date.available | 2024-03-12T19:34:41Z | |
dc.date.issued | 2021 | |
dc.identifier.issn | 1300-0632 | |
dc.identifier.issn | 1303-6203 | |
dc.identifier.uri | https://doi.org/10.3906/elk-2105-242 | |
dc.identifier.uri | https://search.trdizin.gov.tr/yayin/detay/526850 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12450/2688 | |
dc.description.abstract | The widespread use of medical imaging devices allows deep analysis of diseases. However, the task of examining medical images increases the burden of specialist doctors. Computer-assisted systems provide an effective management tool that enables these images to be analyzed automatically. Although these tools are used for various purposes, today, they are moving towards retrieval systems to access increasing data quickly. In hospitals, there is a clear need for content-based image retrieval systems to store all images effectively and access them quickly when necessary. In this study, an attention-based end-to-end convolutional neural network (CNN) framework that can provide effective access to similar images from a large X-ray dataset is presented. In the first part of the proposed framework, a fully convolutional network architecture with attention structures is presented. This section contains several layers for determining the saliency points of X-ray images. In the second part of the framework, the image modified with the X-ray saliency map is converted to representative codes in Euclidean space by the ResNet-18 architecture. Finally, hash codes are obtained by transforming these codes into Hamming space. The proposed study is superior in terms of high performance and customized layers compared to current state-of-the-art X-ray image retrieval methods in the literature. Extensive experimental studies reveal that the proposed framework can increase the current precision performance by up to 13%. | en_US |
dc.description.sponsorship | Scientific and Technological Research Council of Turkey (TUBITAK) [120E018] | en_US |
dc.description.sponsorship | This research is funded by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 120E018. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Tubitak Scientific & Technological Research Council Turkey | en_US |
dc.relation.ispartof | Turkish Journal of Electrical Engineering and Computer Sciences | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | X-ray | en_US |
dc.subject | attention | en_US |
dc.subject | retrieval | en_US |
dc.subject | hash | en_US |
dc.subject | CNN | en_US |
dc.title | Attention-based end-to-end CNN framework for content-based X-ray image retrieval | en_US |
dc.type | article | en_US |
dc.department | Amasya Üniversitesi | en_US |
dc.authorid | Alhudhaif, Adi/0000-0002-7201-6963 | |
dc.authorid | Öztürk, Şaban/0000-0003-2371-8173 | |
dc.authorid | Polat, Kemal/0000-0003-1840-9958 | |
dc.identifier.volume | 29 | en_US |
dc.identifier.startpage | 2680 | en_US |
dc.identifier.endpage | 2693 | en_US |
dc.relation.publicationcategory | Article - International Refereed Journal - Institutional Faculty Member | en_US |
dc.identifier.scopus | 2-s2.0-85117241123 | en_US |
dc.identifier.trdizinid | 526850 | en_US |
dc.identifier.doi | 10.3906/elk-2105-242 | |
dc.department-temp | [Ozturk, Saban] Amasya Univ, Dept Elect & Elect Engn, Amasya, Turkey; [Alhudhaif, Adi] Prince Sattam Bin Abdulaziz Univ, Coll Comp Engn & Sci Al Kharj, Dept Comp Sci, Al Kharj, Saudi Arabia; [Polat, Kemal] Abant Izzet Baysal Univ, Dept Elect & Elect Engn, Bolu, Turkey | en_US |
dc.identifier.wos | WOS:000706889700002 | en_US |
dc.authorwosid | Alhudhaif, Adi/AAN-6541-2021 | |
dc.authorwosid | Öztürk, Şaban/ABI-3936-2020 | |
dc.authorwosid | Alhudhaif, Adi/AAF-1937-2021 | |
dc.authorwosid | Polat, Kemal/AGZ-2143-2022 | |