Improving Speaker Recognition by Biometric Voice Deconstruction

Bibliographic Details
Main Authors: Luis Miguel Mazaira-Fernández, Agustín Álvarez-Marquina, Pedro Gomez-Vilda
Format: Article
Language: English
Published: Frontiers Media S.A. 2015-09-01
Series: Frontiers in Bioengineering and Biotechnology
Subjects:
Online Access: http://journal.frontiersin.org/Journal/10.3389/fbioe.2015.00126/full
Description
Summary: Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g. YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have had to be replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. This paper presents a new methodology to characterize speakers, building on advances achieved in recent years in understanding and modelling voice production. The paper hypothesizes that a gender-dependent characterization of speakers, combined with a new set of biometric parameters extracted from the components resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates when compared to classical approaches. A general description of the main hypothesis and of the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a database recorded under highly controlled acoustic conditions and on one recorded over a mobile phone network under non-controlled acoustic conditions.
ISSN: 2296-4185
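
The summary above refers to deconstructing the voice into glottal-source and vocal-tract estimates. As a rough illustration of that source-filter idea only (not the parameterization described in the paper), the following sketch estimates the vocal tract with LPC analysis and obtains a crude glottal-source residual by inverse filtering; the LPC order, frame sizes, synthetic input, and the use of librosa are assumptions made for this example.

```python
# Illustrative sketch of source-filter "deconstruction": LPC gives an all-pole
# vocal-tract estimate; inverse filtering yields a rough glottal-source residual.
# This is not the paper's actual feature-extraction pipeline.
import numpy as np
import librosa
from scipy.signal import lfilter

def deconstruct_frame(frame, lpc_order=16):
    """Return (vocal_tract_coeffs, glottal_residual) for one analysis frame."""
    windowed = frame * np.hamming(len(frame))
    a = librosa.lpc(windowed, order=lpc_order)   # prediction-error filter A(z)
    residual = lfilter(a, [1.0], windowed)       # inverse filtering -> source estimate
    return a, residual

# Placeholder input: a synthetic, vowel-like signal stands in for real speech.
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)

# Frame the signal and collect per-frame estimates; biometric parameters
# (e.g. spectral or cepstral features of each component) could then be
# computed separately from the vocal-tract and glottal-source estimates.
frame_len, hop = int(0.030 * sr), int(0.010 * sr)
frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop).T
estimates = [deconstruct_frame(f) for f in frames]
```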