How Amdahl’s Law limits the performance of large artificial neural networks

Bibliographic Details
Main Author: János Végh
Format: Article
Language: English
Published: SpringerOpen, 2019-04-01
Series: Brain Informatics
Subjects:
Online Access: http://link.springer.com/article/10.1186/s40708-019-0097-2
Description
Summary: As we learn more about how neurons and complex neural networks operate, and as demand grows for large, high-performance artificial networks, increasing effort is being devoted to building hardware and software simulators and supercomputers for artificial intelligence applications, which require an exponentially growing amount of computing capacity. However, the inherently parallel operation of neural networks is mostly simulated on inherently sequential (or, at best, sequential–parallel) computing elements. The paper shows that neural network simulators, both software and hardware, like all other sequential–parallel computing systems, face a computing performance limitation stemming from clock-driven electronic circuits, the 70-year-old computing paradigm, and Amdahl's Law for parallelized computing systems. These findings explain the limitations and saturation observed in earlier studies.
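The saturation the abstract refers to follows directly from Amdahl's Law: if a fraction p of the work can be parallelized, the speedup on N processors is S(N) = 1 / ((1 - p) + p/N), which is bounded by 1/(1 - p) no matter how large N grows. A minimal sketch illustrating this bound (the function name and parameters are illustrative, not from the paper):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Speedup predicted by Amdahl's Law.

    The serial fraction (1 - p) caps the achievable speedup at 1/(1 - p),
    regardless of how many processors are added.
    """
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# Even with 95% of the work parallelizable, a million processors
# yield a speedup of barely 20x, essentially the 1/(1 - 0.95) limit.
print(round(amdahl_speedup(0.95, 1_000_000), 1))
```

This is why, in the paper's argument, simply scaling up sequential–parallel simulator hardware cannot deliver proportional performance for inherently parallel neural networks: the residual sequential fraction dominates as N grows.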
ISSN: 2198-4018, 2198-4026