Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features


Bibliographic Details
Main Authors: Alvaro Joaquin Gaona, Pedro David Arini
Format: Article
Language: English
Published: Universidad de Buenos Aires, 2020-12-01
Series: Revista Elektrón
Subjects:
Online Access: http://elektron.fi.uba.ar/index.php/elektron/article/view/101
Description
Summary: In this work, a novel stack of well-known technologies is presented as an automatic method for segmenting the heart sounds in a phonocardiogram (PCG). We show a deep recurrent neural network (DRNN) capable of segmenting a PCG into its main components, together with a specific way of extracting instantaneous frequency features that plays an important role in training and testing the proposed model. More specifically, the approach combines a Long Short-Term Memory (LSTM) neural network with the Fourier Synchrosqueezed Transform (FSST), used to extract instantaneous time-frequency features from the PCG. The approach was tested on heart sound signals between 5 and 35 seconds long from freely available databases. With a relatively small architecture, a small dataset and the right features, the method achieved near state-of-the-art performance: an average sensitivity of 89.5%, an average positive predictive value of 89.3% and an average accuracy of 91.3%.
ISSN: 2525-0159
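The pipeline described in the summary (frame-wise time-frequency features fed to an LSTM that labels each frame with a heart-sound state) can be sketched as follows. This is a minimal illustration, not the authors' implementation: SciPy has no FSST, so a plain STFT magnitude stands in for the synchrosqueezed features, the LSTM weights are random placeholders rather than trained parameters, and the synthetic signal, frame sizes, and four-state labeling (S1, systole, S2, diastole) are assumptions based on the abstract.

```python
import numpy as np
from scipy.signal import stft

# Synthetic stand-in for a PCG recording: 2 kHz sampling, 6 seconds
# (within the 5-35 s range the paper tests on).
fs = 2000
t = np.arange(0, 6, 1 / fs)
pcg = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)

# Frame-wise time-frequency features. The paper uses the Fourier
# Synchrosqueezed Transform; STFT magnitudes are used here only to
# illustrate the (n_frames, n_features) layout the LSTM consumes.
f, frame_times, Z = stft(pcg, fs=fs, nperseg=128, noverlap=64)
features = np.abs(Z).T                      # shape: (n_frames, n_freq_bins)

# One LSTM cell, forward pass only, in NumPy: each frame's feature vector
# updates the hidden state, which is projected to 4 state scores
# (S1, systole, S2, diastole). Weights are untrained placeholders.
rng = np.random.default_rng(0)
n_in, n_hidden, n_states = features.shape[1], 16, 4
Wx = rng.normal(0, 0.1, (4 * n_hidden, n_in))    # input weights (i, f, g, o gates)
Wh = rng.normal(0, 0.1, (4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
Wo = rng.normal(0, 0.1, (n_states, n_hidden))    # output projection

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
scores = []
for x in features:
    z = Wx @ x + Wh @ h + b
    i_g, f_g, g_g, o_g = np.split(z, 4)
    c = sigmoid(f_g) * c + sigmoid(i_g) * np.tanh(g_g)   # cell state update
    h = sigmoid(o_g) * np.tanh(c)                        # hidden state
    scores.append(Wo @ h)

states = np.argmax(np.array(scores), axis=1)   # one predicted state per frame
```

In a real system the weights would be learned from annotated PCG frames, and the FSST would concentrate energy along instantaneous-frequency ridges far more sharply than the STFT, which is the property the paper exploits.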