id |
ndltd-NEU--neu-bz60xp273
|
record_format |
oai_dc
|
spelling |
ndltd-NEU--neu-bz60xp2732021-06-03T05:15:32ZReal-Time grasp type estimation for a robotic prosthetic handFor lower arm amputees, prosthetic hands promise to restore most physical interaction capabilities. This requires accurately predicting hand gestures capable of grasping varying objects and executing them in a timely manner as intended by the user. Current approaches often rely on physiological signal inputs such as Electromyography (EMG) signals from residual limb muscles to infer the intended motion. However, limited signal quality, user diversity, and high variability adversely affect system robustness. Instead of relying solely on EMG signals, our work augments EMG intent inference with physical state probabilities through machine learning and computer vision methods. To this end, we: (i) study state-of-the-art deep neural network architectures to select a performant source of knowledge transfer for the prosthetic hand; (ii) use a dataset containing object images and probability distributions over grasp types as a new form of labeling: instead of the conventional one-hot classification labels of zeros and ones, our labels are sets of probabilities that sum to 1. The proposed method generates probabilistic predictions that can be fused with EMG-based probability predictions over grasps using the visual information from the palm camera of a prosthetic hand. Moreover, as robotic prosthetic hands are targeted at amputees with the goal of assisting them in their daily life activities, it is crucial to have a portable and reliable system. Although the embedded devices employed in such systems provide portability and comfort for the end user, their limited computational resources compared to a desktop or server computer impose longer latencies when executing such applications, making them unreliable and generally impractical to use. 
Therefore, it is critical to optimize such applications, especially DNNs, to meet the specified deadline, resulting in a real-time system. To this end, for real-time execution of grasp estimation we propose: (iii) the concept of layer removal as a means of constructing TRimmed Networks (TRNs), which are based on removing problem-specific features of a pretrained network used in transfer learning, and (iv) NetCut, a methodology based on an empirical or an analytical latency estimator, which only proposes and retrains TRNs that can meet the application's deadline, hence reducing exploration time significantly. We demonstrate that TRNs can expand the Pareto frontier that trades off latency and accuracy, providing networks that can meet arbitrary deadlines with potential accuracy improvement over off-the-shelf networks. Our experimental results show that such utilization of TRNs, while transferring to a simpler dataset, in combination with NetCut, can lead to the proposal of networks that achieve a relative accuracy improvement of up to 10.43% over existing off-the-shelf neural architectures while meeting a specific deadline, and a 27x speedup in exploration time. The proposed methods in this work enable robust and realistic prediction of the grasp type as well as real-time execution of the detection pipeline, resulting in improved overall satisfaction for the targeted population. --Author's abstracthttp://hdl.handle.net/2047/D20410366
|
collection |
NDLTD
|
sources |
NDLTD
|
description |
For lower arm amputees, prosthetic hands promise to restore most physical interaction capabilities. This requires accurately predicting hand gestures capable of grasping varying objects and executing them in a timely manner as intended by the user. Current approaches often rely on physiological signal inputs such as Electromyography (EMG) signals from residual limb muscles to infer the intended motion. However, limited signal quality, user diversity, and high variability adversely affect system robustness. Instead of relying solely on EMG signals, our work augments EMG intent inference with physical state probabilities through machine learning and computer vision methods. To this end, we: (i) study state-of-the-art deep neural network architectures to select a performant source of knowledge transfer for the prosthetic hand; (ii) use a dataset containing object images and probability distributions over grasp types as a new form of labeling: instead of the conventional one-hot classification labels of zeros and ones, our labels are sets of probabilities that sum to 1. The proposed method generates probabilistic predictions that can be fused with EMG-based probability predictions over grasps using the visual information from the palm camera of a prosthetic hand. Moreover, as robotic prosthetic hands are targeted at amputees with the goal of assisting them in their daily life activities, it is crucial to have a portable and reliable system. Although the embedded devices employed in such systems provide portability and comfort for the end user, their limited computational resources compared to a desktop or server computer impose longer latencies when executing such applications, making them unreliable and generally impractical to use. Therefore, it is critical to optimize such applications, especially DNNs, to meet the specified deadline, resulting in a real-time system. 
To this end, for real-time execution of grasp estimation we propose: (iii) the concept of layer removal as a means of constructing TRimmed Networks (TRNs), which are based on removing problem-specific features of a pretrained network used in transfer learning, and (iv) NetCut, a methodology based on an empirical or an analytical latency estimator, which only proposes and retrains TRNs that can meet the application's deadline, hence reducing exploration time significantly. We demonstrate that TRNs can expand the Pareto frontier that trades off latency and accuracy, providing networks that can meet arbitrary deadlines with potential accuracy improvement over off-the-shelf networks. Our experimental results show that such utilization of TRNs, while transferring to a simpler dataset, in combination with NetCut, can lead to the proposal of networks that achieve a relative accuracy improvement of up to 10.43% over existing off-the-shelf neural architectures while meeting a specific deadline, and a 27x speedup in exploration time. The proposed methods in this work enable robust and realistic prediction of the grasp type as well as real-time execution of the detection pipeline, resulting in improved overall satisfaction for the targeted population. --Author's abstract
|
title |
Real-Time grasp type estimation for a robotic prosthetic hand
|
spellingShingle |
Real-Time grasp type estimation for a robotic prosthetic hand
|
title_short |
Real-Time grasp type estimation for a robotic prosthetic hand
|
title_full |
Real-Time grasp type estimation for a robotic prosthetic hand
|
title_fullStr |
Real-Time grasp type estimation for a robotic prosthetic hand
|
title_full_unstemmed |
Real-Time grasp type estimation for a robotic prosthetic hand
|
title_sort |
real-time grasp type estimation for a robotic prosthetic hand
|
publishDate |
|
url |
http://hdl.handle.net/2047/D20410366
|
_version_ |
1719408380846538752
|