Distributed training and scalability for the particle clustering method UCluster

In recent years, machine-learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are utilised in everything from trigger systems to reconstruction and data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle-physics data that can easily be modified to address a variety of decision problems. In this paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, thereby extending its utility to learn from arbitrarily large data sets. UCluster combines a graph-based neural network called ABCNet with a clustering step, using a combined loss function in the training phase. The original code is publicly available in TensorFlow v1.14 and has previously been trained on a single GPU; it achieves a clustering accuracy of 81% when applied to multi-class classification of simulated jet events. Our implementation adds distributed training by utilising the Horovod framework, which necessitated a migration of the code to TensorFlow v2. Together with Parquet files for splitting the data between compute nodes, distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC data sets. We find that the model is well suited to distributed training, with training time decreasing in direct proportion to the number of GPUs used. However, a more exhaustive, and possibly distributed, hyper-parameter search is required in order to reach the accuracy reported for the original UCluster method.
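
The record carries no code, but the training scheme the abstract describes follows the standard Horovod data-parallel pattern for TensorFlow v2. The sketch below is a minimal illustration of that pattern, not the authors' implementation: the two-layer network and cross-entropy loss are placeholders standing in for the ABCNet backbone and the combined classification-plus-clustering loss.

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow as hvd

# Initialise Horovod and pin each worker process to a single GPU.
hvd.init()
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder network and loss: the real model is the ABCNet backbone
# with a clustering head, trained with a combined loss function.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Common practice: scale the learning rate with the number of workers.
opt = tf.keras.optimizers.Adam(1e-3 * hvd.size())

@tf.function
def train_step(x, y, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    # Average gradients across all workers before applying the update.
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Broadcast the initial state from rank 0 so all workers start in sync.
        hvd.broadcast_variables(model.variables, root_rank=0)
        hvd.broadcast_variables(opt.variables(), root_rank=0)
    return loss

# Toy shard of data standing in for this worker's slice of the jet events.
x = np.random.rand(256, 32).astype('float32')
y = np.random.randint(0, 10, size=256).astype('int32')
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(64)
for i, (xb, yb) in enumerate(dataset):
    train_step(xb, yb, i == 0)
```

Launched with, for example, `horovodrun -np 4 python train.py`, each process trains on its own data shard while gradients are averaged across workers at every step, which is the mechanism behind the near-proportional reduction in training time with GPU count that the abstract reports.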

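The abstract's other ingredient, splitting the input between compute nodes via Parquet files, can be sketched in a similarly hedged way. One common approach, assumed here (the directory layout and file naming are hypothetical, not taken from the paper), is to shard a directory of Parquet files by worker rank:

```python
import glob
import pandas as pd
import horovod.tensorflow as hvd

hvd.init()

# Hypothetical layout: preprocessed jet events stored as many Parquet
# files. Each worker takes every size()-th file, so the data set is
# partitioned across nodes with no shared state or extra communication.
files = sorted(glob.glob('data/jets/*.parquet'))
my_files = files[hvd.rank()::hvd.size()]

# Load this worker's shard into one DataFrame; the schema of the real
# UCluster inputs depends on the preprocessing step and is not shown here.
shard = pd.concat([pd.read_parquet(f) for f in my_files], ignore_index=True)
```

Because each rank reads a disjoint subset of files, growing the data set or the worker pool only changes the partitioning, which is what makes the approach scalable to arbitrarily large inputs.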

Bibliographic Details
Main Authors: Sunneborn Gudnadottir, Olga (Department of Physics and Astronomy, Division of High Energy Physics, Uppsala University); Gedon, Daniel (Department of Information Technology, Division of Systems and Control, Uppsala University); Desmarais, Colin (Department of Mathematics, Uppsala University); Bengtsson Bernander, Karl (Department of Information Technology, Division of Visual Information & Interaction, Uppsala University); Sainudiin, Raazesh; Gonzalez Suarez, Rebeca (Department of Physics and Astronomy, Division of High Energy Physics, Uppsala University)
Format: Article
Language: English
Published: EDP Sciences, 2021-01-01
Series: EPJ Web of Conferences, Vol. 251, Art. 02054
ISSN: 2100-014X
DOI: 10.1051/epjconf/202125102054
Online Access: https://www.epj-conferences.org/articles/epjconf/pdf/2021/05/epjconf_chep2021_02054.pdf