Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning
Sign language enables hearing-impaired people to communicate with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) ha...
| Main Authors: | Boon Giin Lee, Teak-Wei Chong, Wan-Young Chung |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-11-01 |
| Series: | Sensors |
| Online Access: | https://www.mdpi.com/1424-8220/20/21/6256 |
Similar Items

- American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach
  by: Teak-Wei Chong, et al. Published: (2018-10-01)
- A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition using a Wearable Hybrid Sensor System
  by: Haibin Yu, et al. Published: (2019-01-01)
- British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language
  by: Jordan J. Bird, et al. Published: (2020-09-01)
- Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
  by: Francisco Javier Ordóñez, et al. Published: (2016-01-01)
- Deep Human Activity Recognition With Localisation of Wearable Sensors
  by: Isah A. Lawal, et al. Published: (2020-01-01)