Challenges for the Repeatability of Deep Learning Models
Deep learning training typically begins by randomly initializing the weights of trainable layers. Different and/or uncontrolled weight initializations therefore prevent learning the same model multiple times; consequently, such models yield different results during test...
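As an illustration of the issue the abstract describes, here is a minimal sketch, not the authors' experimental setup, assuming PyTorch: seeding every common source of randomness lets two runs start from identical weights, one prerequisite for repeatable training.

```python
# Minimal sketch (illustrative only): fix all common random seeds so that
# two model constructions produce identical initial weights.
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG (shuffling, augmentation)
    torch.manual_seed(seed)           # PyTorch CPU RNG (weight initialization)
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs (no-op without CUDA)
    # Prefer deterministic cuDNN kernels; this can slow training.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything()
model_a = torch.nn.Linear(10, 1)  # both layers now draw their random
seed_everything()
model_b = torch.nn.Linear(10, 1)  # initial weights from the same seed
assert torch.equal(model_a.weight, model_b.weight)
```

Even with all seeds fixed, some parallel GPU kernels are nondeterministic, so seeding alone may not guarantee bit-identical results across runs; the cuDNN flags above trade speed for reproducibility.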
| Main Authors: | Saeed S. Alahmari, Dmitry B. Goldgof, Peter R. Mouton, Lawrence O. Hall |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2020-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9266043/ |
Similar Items
- Tolkning av handskrivna siffror i formulär : Betydelsen av datauppsättningens storlek vid maskininlärning [Interpretation of handwritten digits in forms: the significance of dataset size in machine learning]
  by: Kirik, Engin
  Published: (2021)
- QPLaBSE: Quantized and Pruned Language-Agnostic BERT Sentence Embedding Model : Production-ready compression for multilingual transformers
  by: Langde, Sarthak
  Published: (2021)
- Review and comparative analysis of machine learning libraries for machine learning
  by: Migran N. Gevorkyan, et al.
  Published: (2019-12-01)
- ASVtorch toolkit: Speaker verification with deep neural networks
  by: Kong Aik Lee, et al.
  Published: (2021-06-01)
- hyper-sinh: An accurate and reliable function from shallow to deep learning in TensorFlow and Keras
  by: Luca Parisi, et al.
  Published: (2021-12-01)