Visual feature inter-learning for sign language recognition in emergency medicine
Authors: WEI Chao, LI Yunpeng, LIU Jingze

Affiliations: 1. Tianjin Emergency Center, Tianjin 300011, China; 2. School of Computer Science, Nanjing University of Information Science & Technology, Nanjing 210044, China

Abstract:

Accessible communication based on sign language recognition (SLR) is key to emergency medical assistance for the hearing-impaired community. Balancing the capture of local and global information in SLR for emergency medicine poses a significant challenge. To address this, we propose a novel approach based on the inter-learning of visual features between global and local information. Specifically, our method enhances the perception capability of the visual feature extractor by strategically combining the strengths of convolutional neural networks (CNNs), which are adept at capturing local features, with those of visual transformers, which excel at perceiving global features. Furthermore, to mitigate overfitting caused by the limited availability of sign language data for emergency medical applications, we introduce an enhanced short temporal module that augments the training data with additional subsequences. Experimental results on three publicly available sign language datasets demonstrate the efficacy of the proposed approach.
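The abstract outlines a dual-branch extractor in which a CNN branch and a visual transformer branch inform each other. Below is a minimal sketch of that idea, assuming PyTorch; the module names, dimensions, and the 1x1-projection exchange are illustrative guesses, not the authors' implementation.

```python
# A minimal sketch of CNN-transformer feature inter-learning, assuming PyTorch.
# All names, sizes, and the exchange mechanism are hypothetical illustrations.
import torch
import torch.nn as nn

class InterLearningBlock(nn.Module):
    """One stage in which a CNN branch (local features) and a transformer
    branch (global features) each receive a projection of the other."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # CNN branch: small receptive field, captures local hand/finger detail.
        self.local = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Transformer branch: self-attention over all spatial tokens,
        # captures global posture and scene context.
        self.global_ = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        # 1x1 convs projecting each branch's output into the other branch,
        # one plausible form of the "inter-learning" exchange.
        self.l2g = nn.Conv2d(dim, dim, kernel_size=1)
        self.g2l = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        loc = self.local(x)                                    # (B, C, H, W)
        tokens = x.flatten(2).transpose(1, 2)                  # (B, H*W, C)
        glo = self.global_(tokens).transpose(1, 2).reshape(b, c, h, w)
        loc_refined = loc + self.g2l(glo)   # local branch gains global context
        glo_refined = glo + self.l2g(loc)   # global branch gains local detail
        return loc_refined + glo_refined    # fused features for the next stage

# Example: y = InterLearningBlock()(torch.randn(2, 256, 14, 14))
```

The enhanced short temporal module is described only as producing additional subsequences for augmentation. One plausible reading, again a sketch rather than the paper's method, is ordered random subsampling of the frame sequence:

```python
# Hypothetical subsequence augmentation for a sign video of shape (T, C, H, W).
import random
import torch

def sample_subsequences(frames: torch.Tensor, num_subseq: int = 2,
                        keep_ratio: float = 0.8) -> list[torch.Tensor]:
    """Draw extra training subsequences from one video; the sampling scheme
    (ordered random subsets of frames) is our assumption."""
    t = frames.shape[0]
    k = max(1, int(t * keep_ratio))
    subs = []
    for _ in range(num_subseq):
        idx = sorted(random.sample(range(t), k))  # preserve temporal order
        subs.append(frames[idx])
    return subs
```

Because each subsequence keeps the frames in temporal order, the sign's motion pattern is preserved while the effective number of training clips grows, which is the stated goal of countering overfitting on scarce emergency-medicine sign data.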

Citation

WEI Chao, LI Yunpeng, LIU Jingze. Visual feature inter-learning for sign language recognition in emergency medicine[J]. Optoelectronics Letters, 2025, (10): 619-625.

History
  • Received: September 03, 2024
  • Revised: March 12, 2025
  • Online: September 22, 2025