A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility.
Main Authors: | Qing Lin, Youngjoon Han |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2014-10-01 |
Series: | Sensors |
Subjects: | electronic mobility aids; sensor fusion; object detection; Bayesian network; context-aware guidance; multimodal information transformation |
Online Access: | http://www.mdpi.com/1424-8220/14/10/18670 |
id |
doaj-b1c7da0042c5416da698b224088359ff |
record_format |
Article |
spelling |
doaj-b1c7da0042c5416da698b224088359ff (2020-11-25T00:09:25Z); eng; MDPI AG; Sensors; 1424-8220; 2014-10-01; vol. 14, no. 10, pp. 18670-18700; doi:10.3390/s141018670; s141018670; A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model; Qing Lin (Electronic Engineering Department, Soongsil University, 511 Sangdo-Dong, Dongjak-Gu, Seoul 156-743, Korea); Youngjoon Han (Electronic Engineering Department, Soongsil University, 511 Sangdo-Dong, Dongjak-Gu, Seoul 156-743, Korea); A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility; http://www.mdpi.com/1424-8220/14/10/18670; electronic mobility aids; sensor fusion; object detection; Bayesian network; context-aware guidance; multimodal information transformation |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Qing Lin; Youngjoon Han |
spellingShingle |
Qing Lin; Youngjoon Han; A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model; Sensors; electronic mobility aids; sensor fusion; object detection; Bayesian network; context-aware guidance; multimodal information transformation |
author_facet |
Qing Lin; Youngjoon Han |
author_sort |
Qing Lin |
title |
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model |
title_short |
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model |
title_full |
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model |
title_fullStr |
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model |
title_full_unstemmed |
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model |
title_sort |
context-aware-based audio guidance system for blind people using a multimodal profile model |
publisher |
MDPI AG |
series |
Sensors |
issn |
1424-8220 |
publishDate |
2014-10-01 |
description |
A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility. |
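The description field characterizes the walking context as a fuzzy safety level inferred through a fuzzy logic model. A minimal sketch of that kind of inference follows; the two inputs, all membership breakpoints, and the rule outputs are invented for illustration and do not reproduce the paper's actual model.

```python
def ramp_up(x, a, b):
    """Shoulder membership: 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def safety_level(obstacle_dist_m, free_width_m):
    """Infer a crisp safety level in [0, 1] from two scene measurements."""
    # Fuzzify the inputs (all breakpoints are hypothetical).
    far = ramp_up(obstacle_dist_m, 0.5, 3.0)
    near = 1.0 - far
    wide = ramp_up(free_width_m, 0.5, 1.5)
    narrow = 1.0 - wide

    # Rule base: min models AND; each rule votes for a safety singleton.
    rules = [
        (min(near, narrow), 0.1),  # near obstacle, narrow path -> unsafe
        (min(near, wide),   0.5),  # near obstacle, room to pass -> caution
        (min(far,  narrow), 0.6),  # clear ahead but little margin
        (min(far,  wide),   0.9),  # clear, wide path -> safe
    ]
    # Sugeno-style weighted-average defuzzification.
    den = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / den if den > 0 else 0.5
```

Under these invented rules, `safety_level(0.3, 0.4)` evaluates to 0.1 (unsafe) and `safety_level(5.0, 2.0)` to 0.9 (safe); intermediate inputs blend the rule outputs smoothly, which is what makes a fuzzy safety level suitable for grading guidance messages rather than issuing binary alarms.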
topic |
electronic mobility aids sensor fusion object detection Bayesian network context-aware guidance multimodal information transformation |
url |
http://www.mdpi.com/1424-8220/14/10/18670 |
work_keys_str_mv |
AT qinglin acontextawarebasedaudioguidancesystemforblindpeopleusingamultimodalprofilemodel AT youngjoonhan acontextawarebasedaudioguidancesystemforblindpeopleusingamultimodalprofilemodel AT qinglin contextawarebasedaudioguidancesystemforblindpeopleusingamultimodalprofilemodel AT youngjoonhan contextawarebasedaudioguidancesystemforblindpeopleusingamultimodalprofilemodel |
_version_ |
1725411942237995008 |