
Focal modulation network for lung segmentation in chest X-ray images

Access

info:eu-repo/semantics/closedAccess

Date

2023

Authors

Ozturk, Saban
Cukur, Tolga


Abstract

Segmentation of lung regions is of key importance for the automatic analysis of Chest X-Ray (CXR) images, which have a vital role in the detection of various pulmonary diseases. Precise identification of lung regions is the basic prerequisite for disease diagnosis and treatment planning. However, achieving precise lung segmentation poses significant challenges due to factors such as variations in anatomical shape and size, the presence of strong edges at the rib cage and clavicle, and overlapping anatomical structures resulting from diverse diseases. Although commonly considered the de facto standard in medical image segmentation, the convolutional UNet architecture and its variants fall short in addressing these challenges, primarily due to their limited ability to model long-range dependencies between image features. While vision transformers equipped with self-attention mechanisms excel at capturing long-range relationships, segmentation tasks on high-resolution images typically adopt either coarse-grained global self-attention or fine-grained local self-attention to alleviate the quadratic computational cost, at the expense of performance. This paper introduces a focal modulation UNet model (FMN-UNet) to enhance segmentation performance by effectively aggregating fine-grained local and coarse-grained global relations at a reasonable computational cost. FMN-UNet first encodes CXR images via a convolutional encoder to suppress background regions and extract latent feature maps at a relatively modest resolution. It then leverages global and local attention mechanisms to model contextual relationships across the images. These contextual feature maps are convolutionally decoded to produce segmentation masks. The segmentation performance of FMN-UNet is compared against state-of-the-art methods on three public CXR datasets (JSRT, Montgomery, and Shenzhen). Experiments on each dataset demonstrate the superior performance of FMN-UNet over the baselines.
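The focal modulation idea the abstract relies on can be sketched in code. The following is a minimal, illustrative PyTorch block in the spirit of focal modulation networks (Yang et al., 2022), showing how hierarchical depthwise convolutions aggregate fine-grained local context and a pooled branch adds coarse-grained global context, all at linear rather than quadratic cost. It is not the authors' released FMN-UNet implementation; class and parameter names here are assumptions for illustration only.

```python
# Minimal sketch of a focal modulation block (hypothetical, for illustration).
# Local context: stacked depthwise convs with growing receptive fields.
# Global context: a spatially pooled branch. Both are gated per position
# and used to modulate a query projection of the input.
import torch
import torch.nn as nn

class FocalModulation(nn.Module):
    def __init__(self, dim, focal_levels=3, kernel_size=3):
        super().__init__()
        # One projection yields query, context, and per-level gates.
        self.f = nn.Linear(dim, 2 * dim + (focal_levels + 1))
        self.h = nn.Conv2d(dim, dim, 1)   # modulator projection
        self.proj = nn.Linear(dim, dim)   # output projection
        self.levels = focal_levels
        # Hierarchical depthwise convs: kernel grows with each focal level.
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size + 2 * k,
                          padding=(kernel_size + 2 * k) // 2, groups=dim),
                nn.GELU())
            for k in range(focal_levels))

    def forward(self, x):                 # x: (B, H, W, C)
        B, H, W, C = x.shape
        q, ctx, gates = torch.split(self.f(x), (C, C, self.levels + 1), dim=-1)
        ctx = ctx.permute(0, 3, 1, 2)     # (B, C, H, W)
        gates = gates.permute(0, 3, 1, 2) # (B, levels + 1, H, W)
        agg = 0
        for k, layer in enumerate(self.layers):   # fine-grained local context
            ctx = layer(ctx)
            agg = agg + ctx * gates[:, k:k + 1]
        # Coarse-grained global context: spatial average of the last level.
        glob = ctx.mean(dim=(2, 3), keepdim=True)
        agg = agg + nn.functional.gelu(glob) * gates[:, self.levels:]
        out = q * self.h(agg).permute(0, 2, 3, 1) # modulate the query
        return self.proj(out)

x = torch.randn(1, 32, 32, 64)            # a single 32x32 feature map
y = FocalModulation(dim=64)(x)
print(y.shape)                            # torch.Size([1, 32, 32, 64])
```

In a UNet-style encoder-decoder such as the one the abstract describes, a block like this would sit on the latent feature maps produced by the convolutional encoder, before the convolutional decoder upsamples them to a segmentation mask.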

Volume

31

Issue

6

Links

https://doi.org/10.55730/1300-0632.4031
https://search.trdizin.gov.tr/yayin/detay/1208560
https://hdl.handle.net/20.500.12450/2779

Collections

  • Scopus İndeksli Yayınlar Koleksiyonu [1574]
  • TR-Dizin İndeksli Yayınlar Koleksiyonu [1323]
  • WoS İndeksli Yayınlar Koleksiyonu [2182]



DSpace software copyright © 2002-2015 DuraSpace
Contact | Feedback
Theme by @mire NV

Amasya University Library and Documentation Department, Amasya, Turkey
If you notice any errors in the content, please report them to: openaccess@amasya.edu.tr

DSpace@Amasya by Amasya University Institutional Repository is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 Unported License.
