Summary: | Bayesian neural networks allow us to keep track of uncertainties, for example
in top tagging, by learning a tagger output together with an error band. We
illustrate the main features of Bayesian versions of established deep-learning
taggers. We show how they capture statistical uncertainties from finite
training samples, systematics related to the jet energy scale, and stability
issues due to pile-up. Altogether, Bayesian networks offer many new handles to
understand and control deep learning at the LHC without introducing a visible
prior effect and without compromising the network performance.
|
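
The abstract describes learning a tagger output together with a per-jet error band. Below is a minimal sketch of that idea, not the authors' actual implementation: a variational ("Bayes by backprop") dense layer in PyTorch, where each weight carries a learned Gaussian, and the tagger is evaluated by sampling the network several times. The sample mean gives the tagger score, the sample spread the error band. All layer sizes, the prior width, and the names (`BayesianLinear`, `BayesianTagger`, `n_samples`, ...) are illustrative assumptions.

```python
# Sketch only: a variational dense layer and a two-layer Bayesian tagger.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Dense layer with an independent Gaussian posterior per weight/bias."""
    def __init__(self, n_in, n_out, prior_sigma=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -5.0))  # sigma = softplus(rho)
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -5.0))
        self.prior_sigma = prior_sigma  # zero-mean Gaussian prior width (assumed)

    def forward(self, x):
        # Reparameterization trick: draw one weight sample per forward pass.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

    def kl(self):
        # Closed-form KL between the Gaussian posterior and the Gaussian prior.
        def kl_term(mu, sigma):
            return (torch.log(self.prior_sigma / sigma)
                    + (sigma**2 + mu**2) / (2 * self.prior_sigma**2) - 0.5).sum()
        return (kl_term(self.w_mu, F.softplus(self.w_rho))
                + kl_term(self.b_mu, F.softplus(self.b_rho)))

class BayesianTagger(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.fc1 = BayesianLinear(n_features, 64)
        self.fc2 = BayesianLinear(64, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x))))

    def predict(self, x, n_samples=100):
        # Sample the weight posterior: mean = tagger output, std = error band.
        with torch.no_grad():
            preds = torch.stack([self(x) for _ in range(n_samples)])
        return preds.mean(dim=0), preds.std(dim=0)
```

In training, the classification loss would be augmented by the summed KL terms divided by the training-set size, e.g. `F.binary_cross_entropy(model(x), y) + (model.fc1.kl() + model.fc2.kl()) / n_train`. This scaling is one way the finite size of the training sample enters the error band: with fewer jets, the KL penalty keeps the posteriors broad and the predicted uncertainty large.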