Abstract

Understanding signal propagation in a neural network is essential not only for improving its trainability but also for shedding light on its robustness properties. In this project, we study the effects of signal propagation on both of these aspects in neural networks, including compressed (either sparse or quantized) networks.
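
To make the notion of signal propagation concrete, here is a minimal sketch (not taken from any of the publications below; all names in it are illustrative) that assumes a plain ReLU MLP with He-style Gaussian initialization. It tracks the mean squared activation across layers and shows how the forward signal decays or explodes once the weight scale deviates from the variance-preserving value.

```python
# Minimal sketch: forward signal propagation in a random ReLU MLP.
# Illustrative only; not code from the publications below.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(widths, gain=1.0):
    # He-style Gaussian init; std = gain * sqrt(2 / fan_in).
    return [rng.normal(0.0, gain * np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
            for fan_in, fan_out in zip(widths[:-1], widths[1:])]

def signal_per_layer(weights, x):
    # Propagate a batch forward and record the mean squared activation
    # (the "signal magnitude") after every layer.
    signal, h = [], x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU
        signal.append(float((h ** 2).mean()))
    return signal

x = rng.normal(size=(256, 512))   # batch of random unit-variance inputs
widths = [512] * 21               # 20-layer MLP of constant width
for gain in (0.9, 1.0, 1.1):
    s = signal_per_layer(init_mlp(widths, gain), x)
    print(f"gain={gain}: signal at layer 1 = {s[0]:.3f}, layer 20 = {s[-1]:.3f}")
```

Under the variance-preserving scale (gain = 1.0) the expected signal magnitude stays near its input value layer after layer, whereas a small deviation compounds to roughly gain^(2 x depth); this exponential sensitivity is the kind of effect the works below analyse for dense, sparse, and quantized networks.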

Publications

Improved Gradient based Adversarial Attacks for Quantized Networks.
Kartik Gupta and Thalaiyasingam Ajanthan.
Association for the Advancement of Artificial Intelligence (AAAI), November 2022.
[pdf] [arxiv] [bib]

Bidirectional Self-Normalizing Neural Networks.
Yao Lu, Stephen Gould, and Thalaiyasingam Ajanthan.
Neural Networks, August 2023.
[pdf] [arxiv] [talk] [bib]

A Signal Propagation Perspective for Pruning Neural Networks at Initialization.
Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip H. S. Torr.
International Conference on Learning Representations (ICLR), April 2020. (spotlight)
[pdf] [arxiv] [talk] [code] [bib]