Paper | 25 March 2023
A study of Japanese sign language recognition using human skeleton data
Alyssa Takazume, Noriko Yata, Yoshitsugu Manabe
Proceedings Volume 12592, International Workshop on Advanced Imaging Technology (IWAIT) 2023; 125920X (2023) https://doi.org/10.1117/12.2664992
Event: International Workshop on Advanced Imaging Technology (IWAIT) 2023, Jeju, Republic of Korea
Abstract
Sign language is a visual language that uses hand signs, arm movements, and facial expressions to convey information. However, learning sign language is a challenging task, due to the variety of movements that must be executed precisely to convey the correct information. Many studies have addressed recognizing and translating sign languages, but traditional methods face problems in real-world settings due to variations in background and lighting. Recently, research applying multi-modal data, including human skeleton data, to the Sign Language Recognition (SLR) task has achieved remarkable success.1 This paper aims to build a model that recognizes sign language from skeleton data alone using a Graph Convolutional Network (GCN), and to create a Japanese Sign Language dataset to support future SLR research.
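The core operation behind skeleton-based GCN models like the one described is a spatial graph convolution: joint features are aggregated along the skeleton's bone structure (the graph's edges) and then linearly transformed. The following is a minimal sketch of that idea, assuming a toy 5-joint arm skeleton and random weights; the joint layout, dimensions, and function names are illustrative, not the authors' actual model.

```python
import numpy as np

def normalized_adjacency(edges, num_joints):
    """Build a symmetrically normalized adjacency matrix with self-loops,
    so each joint averages over itself and its skeletal neighbors."""
    A = np.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_conv(X, A_norm, W):
    """One GCN layer: aggregate neighbor features along the skeleton,
    apply a learnable linear transform, then a ReLU nonlinearity.
    X: (num_joints, in_dim) per-frame joint coordinates/features.
    W: (in_dim, out_dim) weight matrix."""
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy skeleton: shoulder-elbow-wrist chain plus two fingers at the wrist.
edges = [(0, 1), (1, 2), (2, 3), (2, 4)]
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # (x, y, z) coordinates per joint
W = rng.standard_normal((3, 8))   # project each joint to 8-dim features
A_norm = normalized_adjacency(edges, 5)
H = graph_conv(X, A_norm, W)
print(H.shape)  # one (5, 8) feature map per video frame
```

In a full SLR model, a stack of such layers (often interleaved with temporal convolutions over the frame axis, as in the ST-GCN family) turns a sequence of per-frame keypoint sets into a single sign classification.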
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Alyssa Takazume, Noriko Yata, and Yoshitsugu Manabe "A study of Japanese sign language recognition using human skeleton data", Proc. SPIE 12592, International Workshop on Advanced Imaging Technology (IWAIT) 2023, 125920X (25 March 2023); https://doi.org/10.1117/12.2664992
KEYWORDS: RGB color model, Data modeling, Deep learning, Education and training, Convolution, Feature extraction, Analytical research