Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability
In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.
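The abstract's central idea, partitioning a hypergraph model of the SNN so that fewer process pairs need to exchange spikes, can be illustrated with a small accounting sketch. This is an illustrative toy, not the authors' implementation: the network, the two allocations, and the function name are invented for the example.

```python
def communication_pairs(hyperedges, part):
    """Count directed process pairs that must exchange spikes.

    Each hyperedge is (source_neuron, targets): when the source fires,
    its spike must reach every process that owns one of its targets.
    Fewer cross-partition pairs means a sparser communication graph.
    """
    pairs = set()
    for src, targets in hyperedges:
        for t in targets:
            if part[src] != part[t]:
                pairs.add((part[src], part[t]))
    return pairs

# Toy SNN: two densely connected clusters {0..3} and {4..7},
# with a single synapse (3 -> 4) bridging them.
hyperedges = [
    (0, [1, 2, 3]), (1, [0, 2]), (2, [1, 3]),
    (4, [5, 6, 7]), (5, [4, 6]), (6, [5, 7]),
    (3, [4]),
]

# Connectivity-aware allocation keeps each cluster on one process...
aware = {n: (0 if n < 4 else 1) for n in range(8)}
# ...while a naive round-robin allocation interleaves them.
round_robin = {n: n % 2 for n in range(8)}

print(len(communication_pairs(hyperedges, aware)))        # -> 1
print(len(communication_pairs(hyperedges, round_robin)))  # -> 2
```

On realistic networks with many processes, this gap is what a hypergraph partitioner is asked to widen: it searches for an allocation that minimizes cross-partition hyperedge spans, and the communication graph becomes correspondingly sparser.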
Main Authors: | Carlos Fernandez-Musoles, Daniel Coca, Paul Richmond |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2019-04-01 |
Series: | Frontiers in Neuroinformatics |
Subjects: | Spiking Neural Networks; distributed simulation; hypergraph partitioning; dynamic sparse data exchange; HPC |
Online Access: | https://www.frontiersin.org/article/10.3389/fninf.2019.00019/full |
id |
doaj-3f927c6753c94aee84dc49196c1b1749 |
record_format |
Article |
spelling |
doaj-3f927c6753c94aee84dc49196c1b1749; 2020-11-24T21:45:59Z; eng; Frontiers Media S.A.; Frontiers in Neuroinformatics; ISSN 1662-5196; 2019-04-01; vol. 13; doi:10.3389/fninf.2019.00019 |
Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability |
Carlos Fernandez-Musoles (Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom); Daniel Coca (Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom); Paul Richmond (Computer Science, University of Sheffield, Sheffield, United Kingdom) |
https://www.frontiersin.org/article/10.3389/fninf.2019.00019/full |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Carlos Fernandez-Musoles; Daniel Coca; Paul Richmond |
spellingShingle |
Carlos Fernandez-Musoles; Daniel Coca; Paul Richmond; Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability; Frontiers in Neuroinformatics; Spiking Neural Networks; distributed simulation; hypergraph partitioning; dynamic sparse data exchange; HPC |
author_facet |
Carlos Fernandez-Musoles; Daniel Coca; Paul Richmond |
author_sort |
Carlos Fernandez-Musoles |
title |
Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability |
title_short |
Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability |
title_full |
Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability |
title_fullStr |
Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability |
title_full_unstemmed |
Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability |
title_sort |
communication sparsity in distributed spiking neural network simulations to improve scalability |
publisher |
Frontiers Media S.A. |
series |
Frontiers in Neuroinformatics |
issn |
1662-5196 |
publishDate |
2019-04-01 |
description |
In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network. |
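The "dynamic sparse exchange" the description refers to replaces a dense handshake, in which every process asks every other process whether data is coming, with sender-driven discovery, so control traffic scales with the number of real communication edges rather than with the process count. The sketch below only counts control messages under each scheme on a toy communication graph; the names and numbers are invented for illustration, and a real implementation would use non-blocking MPI primitives rather than this bookkeeping.

```python
def dense_handshake_msgs(num_procs, sends):
    """Point-to-point handshake: every process tells every other process
    how many messages to expect, even when the answer is zero."""
    return num_procs * (num_procs - 1)

def sparse_exchange_msgs(num_procs, sends):
    """Dynamic sparse exchange: senders just send; receivers discover
    senders by probing, so control traffic tracks the real edges."""
    return sum(len(targets) for targets in sends.values())

# Toy communication graph on 8 processes: after a good partitioning,
# only two process pairs actually exchange spikes in this timestep.
sends = {0: {1}, 3: {4}}

print(dense_handshake_msgs(8, sends))   # -> 56, grows as P * (P - 1)
print(sparse_exchange_msgs(8, sends))   # -> 2, grows with real edges
```

The two techniques compound, as the description's combined results suggest: hypergraph-based allocation shrinks the set of real edges, and sparse exchange makes the cost of communication proportional to that smaller set.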
topic |
Spiking Neural Networks; distributed simulation; hypergraph partitioning; dynamic sparse data exchange; HPC |
url |
https://www.frontiersin.org/article/10.3389/fninf.2019.00019/full |