Reducing Training Time of Deep Learning Based Digital Backpropagation by Stacking

Bibliographic Details
Main Authors: Baeuerle, B. (Author), Bitachon, B.I. (Author), Eppenberger, M. (Author), Leuthold, J. (Author)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers Inc. 2022
Subjects:
Description
Summary: A method for reducing the training time of deep-learning-based digital backpropagation (DL-DBP) is presented. The method divides a link into smaller sections: one section is compensated by the DL-DBP algorithm, and the same trained model is then reapplied to the subsequent sections. We show in a 32 GBd 16QAM 2400 km 5-channel wavelength-division-multiplexing transmission experiment that the proposed stacked DL-DBP provides a 0.41 dB gain over a linear compensation scheme. This should be compared with the 0.56 dB gain achieved by a non-stacked DL-DBP scheme, which comes at the price of a 203% increase in total training time. Furthermore, it is shown that by training only the last section of the stacked DL-DBP, the compensation gain can be increased to 0.48 dB. © 1989-2012 IEEE.
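The stacking idea described in the summary can be illustrated with a toy numerical sketch. The code below is hypothetical and greatly simplified: it stands in for the authors' DL-DBP with a two-parameter polynomial compensator, fits it on a single fibre section, and then reapplies the same trained model once per section of the full link, comparing it against a purely linear (gain-only) equalizer. The channel model, parameters, and function names are all assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def section_channel(x, a=0.9, c=0.05):
    """Toy per-section distortion: linear attenuation plus a weak cubic term."""
    return a * x + c * x**3

def fit_section_inverse(x_in, y_out):
    """Least-squares fit of a tiny inverse model x ~= w1*y + w3*y**3,
    standing in for training a neural-network section compensator."""
    A = np.stack([y_out, y_out**3], axis=1)
    w, *_ = np.linalg.lstsq(A, x_in, rcond=None)
    return w

n_sections = 4
tx = rng.uniform(-1.0, 1.0, 2048)

# Propagate the signal through the full link (n_sections identical sections).
rx = tx.copy()
for _ in range(n_sections):
    rx = section_channel(rx)

# Train on ONE section only ...
w = fit_section_inverse(tx, section_channel(tx))

# ... then "stack": reapply the same trained model once per section,
# instead of training a separate model for every section of the link.
est = rx.copy()
for _ in range(n_sections):
    est = w[0] * est + w[1] * est**3

mse_linear = np.mean((rx / 0.9**n_sections - tx) ** 2)  # gain-only (linear) equalizer
mse_stacked = np.mean((est - tx) ** 2)
print(mse_stacked < mse_linear)  # the stacked compensator beats the linear one
```

In this sketch the nonlinear residue compounds section by section, so the linear equalizer cannot remove it, while the stacked compensator undoes roughly one section's distortion per application; training cost is that of a single section rather than the whole link, which mirrors the training-time saving claimed in the abstract.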
Physical Description: 4
ISSN: 1041-1135
DOI: 10.1109/LPT.2022.3162157