Summary: | In recent years, deep neural networks have been shown to achieve state-of-the-art performance on several classification and prediction tasks. However, these networks demand undesirably long training times and substantial computational resources (memory, I/O, processing time). In this work, we explore semi-random deep neural networks that achieve near real-time training with reduced computational resource usage. While many works accelerate training by enhancing the underlying hardware, this work focuses on algorithmic optimization. We show that random projection networks with additional skip connections and randomly weighted layers can boost overall network performance while enabling real-time training. Additionally, a tensor-train decomposition technique is leveraged to further reduce the model complexity of these networks. Our investigation accomplishes the following: 1) tensor-train decomposition decreases the complexity of random projection networks, 2) compressing the fully connected hidden layer yields at least a $\sim 40\times$ reduction in memory size, and 3) random projection networks can be trained in near real time.
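As a rough illustration of the compression mechanism referenced above, the sketch below applies the standard TT-SVD algorithm (sequential truncated SVDs) to a dense layer's weight matrix. The layer size, the 4-way reshaping, and the TT ranks here are illustrative assumptions, not the paper's configuration, and the sketch compresses a plain reshaped weight tensor rather than the interleaved TT-matrix layout typically used in tensor-train layers.

```python
import numpy as np

def tt_decompose(tensor, ranks):
    """TT-SVD: factor a d-way tensor into d TT cores via sequential
    truncated SVDs. `ranks` holds the d-1 internal TT ranks."""
    shape = tensor.shape
    cores, r_prev = [], 1
    rest = tensor.reshape(1, -1)
    for k in range(len(shape) - 1):
        rest = rest.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        r = min(ranks[k], S.size)                     # truncate to TT rank r_k
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        rest = S[:r, None] * Vt[:r]                   # remainder carried forward
        r_prev = r
    cores.append(rest.reshape(r_prev, shape[-1], 1))  # final core
    return cores

# Hypothetical 1024x1024 dense weight, reshaped as a (32, 32, 32, 32) tensor;
# the ranks [8, 8, 8] are arbitrary choices for demonstration.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
cores = tt_decompose(W.reshape(32, 32, 32, 32), ranks=[8, 8, 8])
dense, tt = W.size, sum(c.size for c in cores)
print(f"dense params: {dense}, TT params: {tt}, compression: {dense / tt:.0f}x")
```

The printed compression ratio is governed entirely by the chosen tensor shapes and TT ranks, which also control reconstruction accuracy; the paper's reported $\sim 40\times$ memory reduction likewise depends on its particular layer shapes and rank selection.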