Expressing emotions through vibration for perception and control

Bibliographic Details
Main Author: ur Réhman, Shafiq
Format: Doctoral Thesis
Language: English
Published: Umeå universitet, Institutionen för tillämpad fysik och elektronik 2010
Subjects:
HCI
Online Access: http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990
http://nbn-resolving.de/urn:isbn:978-91-7264-978-1
id ndltd-UPSALLA1-oai-DiVA.org-umu-32990
record_format oai_dc
collection NDLTD
language English
format Doctoral Thesis
sources NDLTD
topic Multimodal Signal Processing
Mobile Communication
Vibrotactile Rendering
Locally Linear Embedding
Object Detection
Human Facial Expression Analysis
Lip Tracking
Object Tracking
HCI
Expectation-Maximization Algorithm
Lipless Tracking
Image Analysis
Visually Impaired
Signal processing
Signalbehandling
Image analysis
Bildanalys
Computer science
Datavetenskap
Telecommunication
Telekommunikation
Systems engineering
Systemteknik
description This thesis addresses a challenging problem: how to let the visually impaired “see” others’ emotions. We human beings depend heavily on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, etc. People use the emotional information in facial expressions to switch between conversation topics and to judge the attitudes of individuals. Missing the emotional information carried by facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social settings. To enhance the social interaction abilities of the visually impaired, this thesis takes up the scientific topic of expressing human emotions through vibrotactile patterns.

Delivering human emotions through touch is quite challenging, since the bandwidth of the touch channel is very limited. We first investigated how to render emotions through a single vibrator. We developed a real-time “lipless” tracking system to extract dynamic emotions from the mouth region, and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later, we extended the system to render more general dynamic media signals, for example rendering live football games through vibration on a mobile phone to improve the user’s communication and entertainment experience.

To display more natural emotions (i.e., emotion type plus emotion intensity), we developed technology that lets the visually impaired interpret human emotions directly, achieved with machine vision techniques and a vibrotactile display. The display consists of a matrix of vibration actuators mounted on the back of a chair; the actuators are activated sequentially to convey dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation of facial expressions to replace the state-of-the-art Facial Action Coding System (FACS) approach. We proposed using the manifold of facial expressions to characterize dynamic emotions. On this manifold, the basic emotional expressions with increasing intensity form curves extending from the center; blends of emotions lie between those curves and can be defined analytically by the positions of the main curves. The manifold is thus the “Braille code” of emotions.

The developed methodology and technology have been extended to assistive wheelchair systems that aid a specific group of disabled people, cerebral palsy and stroke patients who lack fine motor control skills and therefore cannot access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair. That manifold is rendered by a 2D vibration array, providing the wheelchair user both with action information derived from gestures and with system status information, which is very important for the usability of such an assistive system. The current research not only lays a foundation stone for vibrotactile rendering systems based on object localization but also takes a concrete step toward a new dimension of human-machine interaction.

Taktil Video (Tactile Video)
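The first system described in the abstract renders primary emotion types through a single phone vibrator. As a rough illustration of that idea only, here is a minimal Python sketch that encodes each emotion category as an on/off vibration pattern; the pattern values, the emotion set, and the vibrate/sleep callbacks are invented for illustration and are not taken from the thesis.

```python
# Hypothetical mapping from primary emotion categories to vibration
# on/off patterns (milliseconds). Values are illustrative, not the
# thesis's actual encodings.
EMOTION_PATTERNS = {
    # emotion: [on, off, on, off, ...] durations in ms
    "happy":    [100, 50, 100, 50, 100],    # short, lively bursts
    "sad":      [600, 300, 600],            # long, slow pulses
    "surprise": [50, 30, 50, 30, 50, 30],   # rapid flutter
    "anger":    [300, 100, 300, 100],       # strong, insistent pulses
}

def render(emotion, vibrate, sleep):
    """Drive a single vibrator. vibrate(ms) turns it on for ms,
    sleep(ms) pauses; both are assumed platform callbacks (e.g.
    thin wrappers around a phone's vibration API)."""
    pattern = EMOTION_PATTERNS[emotion]
    for i, duration in enumerate(pattern):
        if i % 2 == 0:        # even slots are "on" intervals
            vibrate(duration)
        else:                 # odd slots are pauses
            sleep(duration)
```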
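The thesis's real-time “lipless” tracking algorithm is not reproduced here. As a stand-in, the following sketch uses OpenCV's stock Haar face detector and crops the lower third of the face box as a mouth region of interest, roughly the input such a tracker would refine frame by frame. The cascade file and the lower-third heuristic are assumptions, not the thesis's method.

```python
import cv2

# Stock frontal-face Haar cascade shipped with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_roi(frame):
    """Return the lower third of the largest detected face box,
    which roughly contains the mouth, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    return frame[y + 2 * h // 3 : y + h, x : x + w]

cap = cv2.VideoCapture(0)          # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = mouth_roi(frame)
    if roi is not None:
        cv2.imshow("mouth region", roi)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```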
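The subject list includes Locally Linear Embedding, and the abstract describes a manifold on which expression intensity grows along curves extending from a neutral center. Here is a minimal sketch of that idea using scikit-learn's LocallyLinearEmbedding, treating intensity as distance from the neutral frame's embedding; the data layout, neighbor count, and neutral index are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def embed_expressions(X, n_neighbors=12, n_components=2):
    """X: (n_frames, n_pixels) array of flattened grayscale face or
    mouth images from one expression sequence. Returns (n_frames, 2)
    manifold coordinates."""
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components)
    return lle.fit_transform(X)

def intensity(coords, neutral_idx=0):
    """Approximate emotion intensity: on the manifold, intensity grows
    along a curve from the neutral center, so use each frame's distance
    from the neutral frame's embedding as a proxy."""
    center = coords[neutral_idx]
    return np.linalg.norm(coords - center, axis=1)
```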
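For the chair-mounted matrix of vibration actuators that is activated sequentially, here is a hedged sketch of one possible rendering policy: each emotion is assigned a row of the grid (standing in for its manifold curve), and intensity controls how far the sequential sweep travels along that row. The 4x4 grid size, the emotion-to-row table, the timing, and the set_tactor/sleep_ms callbacks are hypothetical, not the thesis's hardware interface.

```python
ROWS, COLS = 4, 4                               # assumed grid size
EMOTION_ROW = {"happy": 0, "surprise": 1, "sad": 2, "anger": 3}

def render_on_matrix(emotion, intensity, set_tactor, sleep_ms):
    """set_tactor(row, col, on) drives one actuator; sleep_ms(ms)
    pauses. intensity in [0, 1] controls how far across the row the
    sequential activation sweeps."""
    row = EMOTION_ROW[emotion]
    n_active = max(1, round(intensity * COLS))
    for col in range(n_active):                 # sequential activation
        set_tactor(row, col, True)
        sleep_ms(120)                           # dwell per actuator
        set_tactor(row, col, False)
```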
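Finally, the wheelchair extension maps head or tongue gestures, via their manifold coordinates, to control actions. A minimal sketch under the assumption of a 2D embedding centered on the neutral pose; the dead-zone radius and command names are invented for illustration.

```python
DEAD_ZONE = 0.2   # ignore small excursions around the neutral pose

def gesture_to_command(x, y):
    """(x, y): manifold coordinates of the current head/tongue gesture,
    assumed centered at the neutral pose. Returns a discrete command."""
    if x * x + y * y < DEAD_ZONE ** 2:
        return "stop"
    # Pick the dominant axis and its sign.
    if abs(x) > abs(y):
        return "turn_right" if x > 0 else "turn_left"
    return "forward" if y > 0 else "backward"
```

The same 2D vibration array used for emotion display could then echo the recognized command back to the user as system status feedback, as the abstract notes is important for usability.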
author ur Réhman, Shafiq
author_facet ur Réhman, Shafiq
author_sort ur Réhman, Shafiq
title Expressing emotions through vibration for perception and control
title_short Expressing emotions through vibration for perception and control
title_full Expressing emotions through vibration for perception and control
title_fullStr Expressing emotions through vibration for perception and control
title_full_unstemmed Expressing emotions through vibration for perception and control
title_sort expressing emotions through vibration for perception and control
publisher Umeå universitet, Institutionen för tillämpad fysik och elektronik
publishDate 2010
url http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990
http://nbn-resolving.de/urn:isbn:978-91-7264-978-1
work_keys_str_mv AT urrehmanshafiq expressingemotionsthroughvibrationforperceptionandcontrol
AT urrehmanshafiq expressingemotionsthroughvibration
_version_ 1716508704544653312
spelling ndltd-UPSALLA1-oai-DiVA.org-umu-32990 2013-01-08T13:06:09Z
Expressing emotions through vibration for perception and control
Alternative title: Expressing emotions through vibration
ur Réhman, Shafiq
Umeå : Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010
Doctoral thesis, comprehensive summary
info:eu-repo/semantics/doctoralThesis
text
http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990
urn:isbn:978-91-7264-978-1
Digital Media Lab, 1652-6295 ; 12
application/pdf
info:eu-repo/semantics/openAccess