Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data Processing

In the paper, a two-layer encoder is proposed. The nodes of the encoder under consideration are neo-fuzzy neurons, which are characterised by a high learning speed and effective approximation properties. The proposed architecture of the neo-fuzzy encoder has a two-layer “bottleneck” structure and it...


Bibliographic Details
Main Authors: Bodyanskiy Yevgeniy, Pliss Iryna, Vynokurova Olena, Peleshko Dmytro, Rashkevych Yuriy
Format: Article
Language: English
Published: Sciendo 2017-12-01
Series: Information Technology and Management Science
Subjects: Artificial neural networks, computational intelligence, data compression, machine learning
Online Access: http://www.degruyter.com/view/j/itms.2017.20.issue-1/itms-2017-0001/itms-2017-0001.xml?format=INT
id doaj-8ae702f509d34b36b9b1cb156b9a45ff
record_format Article
spelling doaj-8ae702f509d34b36b9b1cb156b9a45ff2021-04-02T05:56:58ZengSciendoInformation Technology and Management Science2255-90942017-12-0120161110.1515/itms-2017-0001itms-2017-0001Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data ProcessingBodyanskiy Yevgeniy0Pliss Iryna1Vynokurova Olena2Peleshko Dmytro3Rashkevych Yuriy4Kharkiv National University of Radio Electronics, Kharkiv, UkraineKharkiv National University of Radio Electronics, Kharkiv, UkraineKharkiv National University of Radio Electronics, Kharkiv, UkraineIT Step University, Lviv, UkraineMinistry of Education and Science of Ukraine, Kiev, UkraineIn the paper, a two-layer encoder is proposed. The nodes of the encoder under consideration are neo-fuzzy neurons, which are characterised by a high learning speed and effective approximation properties. The proposed architecture of the neo-fuzzy encoder has a two-layer “bottleneck” structure, and its learning algorithm is based on error backpropagation. The learning algorithm is characterised by a high rate of convergence because the output signals of the encoder’s nodes (neo-fuzzy neurons) depend linearly on the tuning parameters. The proposed learning algorithm can tune both the synaptic weights and the centres of the membership functions. Thus, the paper proposes a hybrid neo-fuzzy system-encoder that has essential advantages over conventional neurocompressors.http://www.degruyter.com/view/j/itms.2017.20.issue-1/itms-2017-0001/itms-2017-0001.xml?format=INTArtificial neural networkscomputational intelligencedata compressionmachine learning
collection DOAJ
language English
format Article
sources DOAJ
author Bodyanskiy Yevgeniy
Pliss Iryna
Vynokurova Olena
Peleshko Dmytro
Rashkevych Yuriy
spellingShingle Bodyanskiy Yevgeniy
Pliss Iryna
Vynokurova Olena
Peleshko Dmytro
Rashkevych Yuriy
Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data Processing
Information Technology and Management Science
Artificial neural networks
computational intelligence
data compression
machine learning
author_facet Bodyanskiy Yevgeniy
Pliss Iryna
Vynokurova Olena
Peleshko Dmytro
Rashkevych Yuriy
author_sort Bodyanskiy Yevgeniy
title Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data Processing
title_short Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data Processing
title_full Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data Processing
title_fullStr Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data Processing
title_full_unstemmed Neo-Fuzzy Encoder and Its Adaptive Learning for Big Data Processing
title_sort neo-fuzzy encoder and its adaptive learning for big data processing
publisher Sciendo
series Information Technology and Management Science
issn 2255-9094
publishDate 2017-12-01
description In the paper, a two-layer encoder is proposed. The nodes of the encoder under consideration are neo-fuzzy neurons, which are characterised by a high learning speed and effective approximation properties. The proposed architecture of the neo-fuzzy encoder has a two-layer “bottleneck” structure, and its learning algorithm is based on error backpropagation. The learning algorithm is characterised by a high rate of convergence because the output signals of the encoder’s nodes (neo-fuzzy neurons) depend linearly on the tuning parameters. The proposed learning algorithm can tune both the synaptic weights and the centres of the membership functions. Thus, the paper proposes a hybrid neo-fuzzy system-encoder that has essential advantages over conventional neurocompressors.
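To make the description concrete, below is a minimal NumPy sketch of a neo-fuzzy layer with triangular membership functions and a two-layer “bottleneck” autoencoder trained by backpropagating the reconstruction error. It is an illustration only, not the authors' implementation: the names (tri_mf, NeoFuzzyLayer, train_autoencoder), the uniform membership grid, and plain stochastic gradient descent are assumptions made for simplicity, and the paper's additional tuning of the membership-function centres is omitted (only the synaptic weights are tuned here).

```python
import numpy as np


def tri_mf(x, centres, h):
    """Triangular membership degrees and their slopes for a vector of scalar
    inputs x over uniformly spaced centres (spacing h)."""
    d = x[:, None] - centres[None, :]                  # (n_inputs, n_mf)
    mu = np.clip(1.0 - np.abs(d) / h, 0.0, None)       # degrees in [0, 1]
    dmu = np.where(mu > 0.0, -np.sign(d) / h, 0.0)     # piecewise-constant slope
    return mu, dmu


class NeoFuzzyLayer:
    """Layer of neo-fuzzy neurons: each output is a sum of nonlinear synapses,
    and each synapse is a weighted sum of membership degrees of one input,
    so every output depends linearly on the synaptic weights."""

    def __init__(self, n_in, n_out, n_mf=7, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centres = np.linspace(-1.0, 1.0, n_mf)     # shared, fixed MF grid
        self.h = self.centres[1] - self.centres[0]
        self.w = rng.normal(0.0, 0.1, (n_out, n_in, n_mf))

    def forward(self, x):
        xc = np.clip(x, self.centres[0], self.centres[-1])
        self.mu, self.dmu = tri_mf(xc, self.centres, self.h)
        return np.einsum("oim,im->o", self.w, self.mu)

    def backward(self, delta, lr):
        """delta = dLoss/dOutput. Linearity in the weights makes the weight
        gradient simply delta outer mu; returns dLoss/dInput for the layer below."""
        grad_x = np.einsum("oim,o,im->i", self.w, delta, self.dmu)
        self.w -= lr * np.einsum("o,im->oim", delta, self.mu)
        return grad_x


def train_autoencoder(X, k=2, epochs=30, lr=0.01):
    """Two-layer 'bottleneck' structure: compress n -> k -> n and tune both
    layers by backpropagating the reconstruction error."""
    n = X.shape[1]
    enc, dec = NeoFuzzyLayer(n, k), NeoFuzzyLayer(k, n)
    for _ in range(epochs):
        for x in X:
            z = enc.forward(x)                # compressed code
            x_hat = dec.forward(z)            # reconstruction
            delta = x_hat - x                 # gradient of 0.5 * ||x_hat - x||^2
            enc.backward(dec.backward(delta, lr), lr)
    return enc, dec


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, (200, 6))       # toy data inside the MF grid range
    enc, dec = train_autoencoder(X)
    x = X[0]
    print("input:         ", np.round(x, 3))
    print("reconstruction:", np.round(dec.forward(enc.forward(x)), 3))
```

The sketch keeps the membership grids fixed to show why convergence is fast when outputs are linear in the tunable weights; extending backward() with a gradient step on self.centres would mirror the paper's joint tuning of weights and membership-function centres.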
topic Artificial neural networks
computational intelligence
data compression
machine learning
url http://www.degruyter.com/view/j/itms.2017.20.issue-1/itms-2017-0001/itms-2017-0001.xml?format=INT
work_keys_str_mv AT bodyanskiyyevgeniy neofuzzyencoderanditsadaptivelearningforbigdataprocessing
AT plissiryna neofuzzyencoderanditsadaptivelearningforbigdataprocessing
AT vynokurovaolena neofuzzyencoderanditsadaptivelearningforbigdataprocessing
AT peleshkodmytro neofuzzyencoderanditsadaptivelearningforbigdataprocessing
AT rashkevychyuriy neofuzzyencoderanditsadaptivelearningforbigdataprocessing
_version_ 1724172206364164096