Show simple item record

dc.contributor.author: Ozturk, Saban
dc.contributor.author: Alhudhaif, Adi
dc.contributor.author: Polat, Kemal
dc.date.accessioned: 2024-03-12T19:34:41Z
dc.date.available: 2024-03-12T19:34:41Z
dc.date.issued: 2021
dc.identifier.issn: 1300-0632
dc.identifier.issn: 1303-6203
dc.identifier.uri: https://doi.org/10.3906/elk-2105-242
dc.identifier.uri: https://search.trdizin.gov.tr/yayin/detay/526850
dc.identifier.uri: https://hdl.handle.net/20.500.12450/2688
dc.description.abstract: The widespread use of medical imaging devices allows deep analysis of diseases. However, the task of examining medical images increases the burden of specialist doctors. Computer-assisted systems provide an effective management tool that enables these images to be analyzed automatically. Although these tools are used for various purposes, today, they are moving towards retrieval systems to access increasing data quickly. In hospitals, the need for content-based image retrieval systems is clearly evident in order to store all images effectively and access them quickly when necessary. In this study, an attention-based end-to-end convolutional neural network (CNN) framework that can provide effective access to similar images from a large X-ray dataset is presented. In the first part of the proposed framework, a fully convolutional network architecture with attention structures is presented. This section contains several layers for determining the saliency points of X-ray images. In the second part of the framework, the modified image with X-ray saliency map is converted to representative codes in Euclidean space by the ResNet-18 architecture. Finally, hash codes are obtained by transforming these codes into Hamming space. The proposed study is superior in terms of high performance and customized layers compared to current state-of-the-art X-ray image retrieval methods in the literature. Extensive experimental studies reveal that the proposed framework can increase the current precision performance by up to 13%. [en_US]
dc.description.sponsorship: Scientific and Technological Research Council of Turkey (TUBITAK) [120E018] [en_US]
dc.description.sponsorship: This research is funded by Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 120E018. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Tubitak Scientific & Technological Research Council Turkey [en_US]
dc.relation.ispartof: Turkish Journal Of Electrical Engineering And Computer Sciences [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: X-ray [en_US]
dc.subject: attention [en_US]
dc.subject: retrieval [en_US]
dc.subject: hash [en_US]
dc.subject: CNN [en_US]
dc.title: Attention-based end-to-end CNN framework for content-based X-ray image retrieval [en_US]
dc.type: article [en_US]
dc.department: Amasya Üniversitesi [en_US]
dc.authorid: Alhudhaif, Adi/0000-0002-7201-6963
dc.authorid: Öztürk, Şaban/0000-0003-2371-8173
dc.authorid: Polat, Kemal/0000-0003-1840-9958
dc.identifier.volume: 29 [en_US]
dc.identifier.startpage: 2680 [en_US]
dc.identifier.endpage: 2693 [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.identifier.scopus: 2-s2.0-85117241123 [en_US]
dc.identifier.trdizinid: 526850 [en_US]
dc.identifier.doi: 10.3906/elk-2105-242
dc.department-temp: [Ozturk, Saban] Amasya Univ, Dept Elect & Elect Engn, Amasya, Turkey; [Alhudhaif, Adi] Prince Sattam Bin Abdulaziz Univ, Coll Comp Engn & Sci Al Kharj, Dept Comp Sci, Al Kharj, Saudi Arabia; [Polat, Kemal] Abant Izzet Baysal Univ, Dept Elect & Elect Engn, Bolu, Turkey [en_US]
dc.identifier.wos: WOS:000706889700002 [en_US]
dc.authorwosid: Alhudhaif, Adi/AAN-6541-2021
dc.authorwosid: Öztürk, Şaban/ABI-3936-2020
dc.authorwosid: Alhudhaif, Adi/AAF-1937-2021
dc.authorwosid: Polat, Kemal/AGZ-2143-2022
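The abstract above describes a final step in which real-valued ResNet-18 codes in Euclidean space are transformed into binary hash codes compared in Hamming space. A minimal sketch of that idea, assuming simple sign-thresholding as the binarization step (the record does not specify the paper's learned hashing layer, so `to_hash_codes` is a hypothetical stand-in):

```python
import numpy as np

def to_hash_codes(embeddings):
    # Threshold a real-valued embedding at zero to get a binary code.
    # Hypothetical stand-in for the paper's actual hashing layer.
    return (np.asarray(embeddings) > 0).astype(np.uint8)

def hamming_distance(a, b):
    # Number of differing bits between two binary codes; retrieval
    # ranks database images by this distance to the query code.
    return int(np.sum(a != b))

# Deterministic toy example with two 8-dimensional "embeddings":
e1 = [0.4, -1.2, 0.9, -0.3, 2.1, -0.7, 0.0, 1.5]
e2 = [0.6, 1.1, -0.8, -0.2, 1.9, 0.3, -0.4, 1.0]
c1, c2 = to_hash_codes(e1), to_hash_codes(e2)
print(list(c1))                  # -> [1, 0, 1, 0, 1, 0, 0, 1]
print(hamming_distance(c1, c2))  # -> 3
```

Comparing short binary codes by Hamming distance is far cheaper than comparing float vectors in Euclidean space, which is what makes hashing attractive for retrieval over large X-ray collections.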


Files in this item:


There are no files associated with this item.

This item appears in the following collection(s).
