An Innovation Study on Applying Deep Learning to Recognize Gesture in Sign Language
Title (Chinese): 運用深度學習辨識手語手勢之創新研究
Main Author: SU, YU-CHEN (蘇宇楨)
Other Author: LIAW, HORNG-TWU (廖鴻圖), advisor
Degree: Master's, Shih Hsin University, Graduate Institute of Information Management (including the in-service master's program), academic year 108
Collection: NDLTD
Format: Others (thesis)
Language: zh-TW
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/6484zw
id: ndltd-TW-107SHU00396063
Description:
People who are deaf or unable to speak usually communicate with one another in sign language, but most hearing people are unfamiliar with it. Recently, the Convolutional Neural Network (CNN), a deep learning architecture, has been shown to deliver superior image-recognition performance when trained on large amounts of labelled data. To increase the public's understanding of sign language, this thesis applies deep learning CNN models to recognize the sign-language gestures for the 37 Mandarin Phonetic Symbols.
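The thesis does not publish its source code. As a minimal, hypothetical sketch, a LeNet-style classifier for the 37 gesture classes could be set up in Python with Keras as follows; the layer widths, the Rescaling normalization, and the optimizer are illustrative assumptions, not details taken from the thesis.

# Hypothetical LeNet-style CNN for the 37 Mandarin Phonetic Symbol gestures.
# Layer sizes and hyperparameters are assumptions for illustration only.
from tensorflow.keras import layers, models

NUM_CLASSES = 37           # one class per Mandarin Phonetic Symbol gesture
INPUT_SHAPE = (96, 96, 3)  # input size used in the thesis experiments

def build_lenet(input_shape=INPUT_SHAPE, num_classes=NUM_CLASSES):
    """Classic LeNet layout: two conv/pool stages followed by dense layers."""
    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=input_shape),  # normalize pixels
        layers.Conv2D(20, (5, 5), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(50, (5, 5), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(500, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # 37-way classification
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model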
The study proceeded in three parts. First, a smartphone was used to capture 720x402-pixel images of each right-hand gesture, averaging 250 to 300 images per gesture, for a database of 10,598 images in total. Second, the images were resized to 96x96x3 and used to train two CNN models, LeNet and SmallVGGNet, with the batch size varied across experiments. Third, the recognition accuracies of the models were compared: the best result on the training data exceeded 96%. When the models were then tested on gesture images with different backgrounds, accuracy dropped to 51.35% for LeNet and 29.73% for SmallVGGNet.
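A rough sketch of the batch-size experiments might look like the following: load a folder-per-class image set, resize it to 96x96, and retrain the same architecture with each candidate batch size. The directory layout, validation split, epoch count, and batch-size values are all assumptions; only the 96x96x3 input size comes from the abstract.

# Hypothetical experiment loop; dataset path and layout are assumptions
# (dataset/<gesture_label>/*.jpg, one folder per gesture class).
import tensorflow as tf

def load_datasets(batch_size, data_dir="dataset/"):
    """Build training/validation datasets of 96x96 images with one-hot labels."""
    common = dict(directory=data_dir, validation_split=0.2, seed=42,
                  image_size=(96, 96), batch_size=batch_size,
                  label_mode="categorical")
    train_ds = tf.keras.utils.image_dataset_from_directory(subset="training", **common)
    val_ds = tf.keras.utils.image_dataset_from_directory(subset="validation", **common)
    return train_ds, val_ds

# Retrain the same architecture with several batch sizes and compare accuracy,
# mirroring the comparison described in the abstract (values assumed).
for batch_size in (32, 64, 128):
    train_ds, val_ds = load_datasets(batch_size)
    model = build_lenet()  # from the sketch above
    history = model.fit(train_ds, validation_data=val_ds, epochs=25)
    best = max(history.history["val_accuracy"])
    print(f"batch_size={batch_size}: best validation accuracy {best:.4f}")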
In summary, this thesis built a single-background image set of the 37 Mandarin Phonetic Symbol gestures and used it to train CNN models; the resulting models reached at most 51.35% recognition accuracy on gesture images with different backgrounds. The experimental process and results can inform further studies and mark a step toward translation between sign language and spoken language.
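The cross-background test reported above (51.35% for LeNet, 29.73% for SmallVGGNet) could be reproduced along these lines; the folder name is a hypothetical placeholder for gesture photos taken against varied backgrounds, and `model` is a trained model from the sketches above.

# Hypothetical evaluation on held-out photos with different backgrounds.
import tensorflow as tf

test_ds = tf.keras.utils.image_dataset_from_directory(
    "test_varied_backgrounds/", image_size=(96, 96),
    label_mode="categorical", shuffle=False)

loss, accuracy = model.evaluate(test_ds)
print(f"accuracy on varied-background images: {accuracy:.2%}")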