Attack and Defense in Cellular Decision-Making: Lessons from Machine Learning

Machine-learning algorithms can be fooled by small, well-designed adversarial perturbations. This is reminiscent of cellular decision-making, where ligands (called antagonists) prevent correct signaling, as in early immune recognition. We draw a formal analogy between neural networks used in machine learning and models of cellular decision-making (adaptive proofreading). We apply attacks from machine learning to simple decision-making models and show explicitly the correspondence to antagonism by weakly bound ligands. Such antagonism is absent in more nonlinear models, which inspires us to implement a biomimetic defense in neural networks that filters out adversarial perturbations. We then apply a gradient-descent approach from machine learning to different cellular decision-making models and reveal the existence of two regimes, characterized by the presence or absence of a critical point for the gradient. This critical point causes the strongest antagonists to lie close to the decision boundary. This is validated in the loss landscapes of robust neural networks and cellular decision-making models, and observed experimentally for immune cells. For both regimes, we explain how the associated defense mechanisms shape the geometry of the loss landscape and why different adversarial attacks are effective in different regimes. Our work connects evolved cellular decision-making to machine learning and motivates the design of a general theory of adversarial perturbations for both in vivo and in silico systems.

Bibliographic Details
Main Authors: Thomas J. Rademaker, Emmanuel Bengio, Paul François
Format: Article
Language: English
Published: American Physical Society, 2019-07-01
Series: Physical Review X
ISSN: 2160-3308
Online Access: http://doi.org/10.1103/PhysRevX.9.031012
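
To make the analogy in the abstract concrete, below is a minimal Python sketch of antagonism in an adaptive-proofreading-style model, together with a gradient-descent search for the strongest antagonist. The ratio form of the response, the exponents N and m, the concentrations and binding times, and the finite-difference gradient are all illustrative assumptions, not the paper's actual equations.

    import numpy as np

    # Toy "adaptive proofreading"-style response (an assumption for
    # illustration, not the paper's exact model):
    #   T = sum(c * tau**N) / sum(c * tau**m),  with m < N,
    # where c are ligand concentrations and tau their binding times.
    N, m = 4, 1

    def response(c, tau):
        return np.sum(c * tau**N) / np.sum(c * tau**m)

    # A strongly bound agonist alone produces a large response.
    tau_ag, c_ag = 10.0, 1.0
    alone = response(np.array([c_ag]), np.array([tau_ag]))

    # Adding many weakly bound ligands inflates the adaptation term in
    # the denominator faster than the numerator, suppressing the
    # response: antagonism, the cellular analogue of an adversarial
    # perturbation.
    mixed = response(np.array([c_ag, 50.0]), np.array([tau_ag, 1.0]))
    print(f"agonist alone: {alone:.1f}   with weak ligands: {mixed:.1f}")

    # Gradient-descent "attack" in the spirit of the paper: adjust the
    # antagonist's binding time to minimize the response, locating the
    # strongest antagonist (gradient taken by central finite difference).
    def strongest_antagonist(tau0, c_ant=50.0, steps=200, lr=0.02, h=1e-5):
        tau_a = tau0
        for _ in range(steps):
            T_up = response(np.array([c_ag, c_ant]),
                            np.array([tau_ag, tau_a + h]))
            T_dn = response(np.array([c_ag, c_ant]),
                            np.array([tau_ag, tau_a - h]))
            tau_a -= lr * (T_up - T_dn) / (2 * h)  # step downhill in T
            tau_a = max(tau_a, 1e-3)               # binding times stay positive
        return tau_a

    print(f"strongest antagonist binding time ~ {strongest_antagonist(2.0):.2f}")

In this toy, the descent settles at an intermediate binding time below the agonist's, qualitatively echoing the paper's observation that the strongest antagonists lie close to the decision boundary rather than far from it.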