
Proceedings of the International Conference on Digital Manufacturing – Volume 2

The mean pixel accuracy across all foreground classes (0.8238) highlights the model's capability to reliably classify pixels belonging to cervical cells of interest. However, the markedly lower mean Intersection over Union (IoU) of 0.0592 underscores a limitation in the precise delineation of cellular boundaries, particularly where overlapping and ambiguous cell boundaries are prevalent. Such challenges are typical of cytological imagery and reflect the inherent complexity of accurate segmentation. The confusion matrix in Figure 24 offers a critical visualisation of classification performance, clearly illustrating confusion patterns, especially among the Dyskeratotic, Koilocytotic, and Metaplastic classes. This pattern of confusion identifies targeted areas for improvement. Future research could employ specialised data augmentation techniques, refine annotation protocols for greater boundary precision, and adopt loss functions tailored to significant class imbalance, such as focal loss.
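The gap between these two metrics can be made concrete with a short sketch computing per-class pixel accuracy (recall) and IoU from a confusion matrix. The toy matrix below is purely illustrative, not the model's actual data: a class whose ground-truth pixels are mostly recovered but that is heavily over-predicted scores high on pixel accuracy yet low on IoU, the same pattern reported above.

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class pixel accuracy (recall) and IoU from a confusion
    matrix whose rows are ground-truth classes and whose columns
    are predicted classes."""
    cm = cm.astype(np.float64)
    tp = np.diag(cm)
    gt = cm.sum(axis=1)       # ground-truth pixels per class
    pred = cm.sum(axis=0)     # predicted pixels per class
    pixel_acc = tp / np.maximum(gt, 1)        # recall per class
    iou = tp / np.maximum(gt + pred - tp, 1)  # |A∩B| / |A∪B|
    return pixel_acc, iou

# Toy 3-class example: class 0 is recalled well (90/100) but also
# heavily over-predicted, so its union is large and its IoU drops.
cm = np.array([[90,  5,  5],
               [40, 55,  5],
               [45,  5, 50]])
acc, iou = per_class_metrics(cm)
print(acc.mean(), iou.mean())  # mean accuracy exceeds mean IoU
```

Because IoU penalises false positives in the union term while per-class pixel accuracy does not, a model can achieve high accuracy on classes it over-segments while its mean IoU stays low.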

In summary, the Swin Transformer-based semantic segmentation model has demonstrated considerable potential for clinical adoption in cervical cancer screening, characterised by high recall and pixel accuracy for the most clinically critical cell categories. However, addressing its lower precision and boundary delineation accuracy through targeted improvements and advanced methodological strategies will be essential for enhancing the model's clinical utility and diagnostic reliability.
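One of the strategies proposed above, focal loss, addresses class imbalance by down-weighting well-classified pixels so training focuses on hard, minority-class examples. A minimal NumPy sketch, assuming softmax probabilities as input; the function name, toy data, and default parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss on softmax probabilities.

    probs:   (N, C) predicted class probabilities per pixel
    targets: (N,)   integer ground-truth labels
    gamma:   focusing parameter; gamma=0 recovers cross-entropy
    alpha:   optional (C,) per-class weights for imbalance
    """
    p_t = probs[np.arange(len(targets)), targets]   # prob of true class
    p_t = np.clip(p_t, 1e-7, 1.0)
    loss = -((1.0 - p_t) ** gamma) * np.log(p_t)    # down-weight easy pixels
    if alpha is not None:
        loss = loss * np.asarray(alpha)[targets]
    return loss.mean()

# A confidently correct pixel (p_t = 0.9) contributes far less to the
# focal loss than a hard, misclassified one (p_t = 0.1).
probs = np.array([[0.9, 0.05, 0.05],
                  [0.3, 0.6,  0.1]])
targets = np.array([0, 2])
print(focal_loss(probs, targets))
```

With gamma = 0 the modulating factor vanishes and the function reduces to ordinary cross-entropy; increasing gamma progressively concentrates the gradient signal on the ambiguous boundary pixels where this model currently struggles.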

               Visualisation of Segmentation Results

To qualitatively assess the performance of the proposed model, sample segmentation outputs are presented in Figure 27. These visualisations depict the model's ability to localise and classify various cervical cell types based on their morphological characteristics. For improved interpretability, each predicted class is assigned a distinct colour.
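A common way to produce such colour-coded outputs is a palette lookup: an array of per-class RGB colours indexed by the integer class mask. The palette and class ordering below are hypothetical stand-ins, not the actual colours used in Figure 27.

```python
import numpy as np

# Hypothetical palette: one RGB colour per class index.
PALETTE = np.array([
    [0,   0,   0],    # 0: background
    [255, 0,   0],    # 1: e.g. Dyskeratotic
    [0,   255, 0],    # 2: e.g. Koilocytotic
    [0,   0,   255],  # 3: e.g. Metaplastic
], dtype=np.uint8)

def colorize(mask):
    """Map an (H, W) integer class mask to an (H, W, 3) RGB image
    via NumPy integer indexing."""
    return PALETTE[mask]

mask = np.array([[0, 1],
                 [2, 3]])
rgb = colorize(mask)
print(rgb.shape)  # (2, 2, 3)
```

The lookup is vectorised, so colourising a full-resolution prediction map is a single indexing operation rather than a per-pixel loop.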





