Abstract
Understanding signal propagation in a neural network is essential not only for improving its trainability, but also for shedding light on its robustness properties. In this project, we study the effects of signal propagation on both of these aspects, in standard as well as compressed (sparse or quantized) networks.
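To make the notion concrete, below is a minimal sketch of one standard signal-propagation diagnostic. It is illustrative only and not code from any of the papers listed here; the depth, width, tanh activation, and orthogonal initialization are all assumptions made for the example. It computes the singular values of a randomly initialized MLP's input-output Jacobian; a spectrum concentrated near one (so-called dynamical isometry) indicates that signals neither vanish nor explode with depth.

# Minimal sketch (illustrative assumptions: 10-layer tanh MLP, width 128,
# orthogonal initialization). Probes signal propagation via the singular
# values of the network's input-output Jacobian at initialization.
import numpy as np

rng = np.random.default_rng(0)

def init_layers(widths, orthogonal=True):
    """Return weight matrices; orthogonal init tends to preserve signal norms."""
    layers = []
    for fan_in, fan_out in zip(widths[:-1], widths[1:]):
        w = rng.standard_normal((fan_out, fan_in)) / np.sqrt(fan_in)
        if orthogonal:
            # Replace the Gaussian draw with the nearest semi-orthogonal matrix.
            u, _, vt = np.linalg.svd(w, full_matrices=False)
            w = u @ vt
        layers.append(w)
    return layers

def jacobian_singular_values(layers, x):
    """Forward pass with tanh, accumulating the Jacobian via the chain rule."""
    jac = np.eye(x.size)
    h = x
    for w in layers:
        h = np.tanh(w @ h)
        # Derivative of tanh at the pre-activations is diag(1 - tanh^2).
        jac = np.diag(1.0 - h**2) @ w @ jac
    return np.linalg.svd(jac, compute_uv=False)

widths = [128] * 11  # 10 layers of constant width
sv = jacobian_singular_values(init_layers(widths), rng.standard_normal(128))
print(f"singular values: mean {sv.mean():.3f}, max {sv.max():.3f}")

Sparsifying or quantizing the weights perturbs this spectrum, which is one way such diagnostics connect compression to trainability and robustness.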
Publications
Improved Gradient based Adversarial Attacks for Quantized Networks.
Kartik Gupta and Thalaiyasingam Ajanthan.
AAAI Conference on Artificial Intelligence (AAAI), February 2022.
[pdf] [arxiv] [bib]
@inproceedings{gupta_iga_aaai22,
  author    = {Gupta, Kartik and Ajanthan, Thalaiyasingam},
  title     = {Improved Gradient based Adversarial Attacks for Quantized Networks},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2022}
}
Bidirectional Self-Normalizing Neural Networks.
Yao Lu, Stephen Gould, and
Thalaiyasingam Ajanthan.
Neural Networks, August 2023.
[pdf] [arxiv] [talk] [bib]
@article{lu_bsnn_nn23,
  author  = {Lu, Yao and Gould, Stephen and Ajanthan, Thalaiyasingam},
  title   = {Bidirectional Self-Normalizing Neural Networks},
  journal = {Neural Networks},
  year    = {2023}
}
A Signal Propagation Perspective for Pruning Neural Networks at Initialization.
Namhoon Lee,
Thalaiyasingam Ajanthan, Stephen Gould, and Philip H. S. Torr.
International Conference on Learning Representations (ICLR), April 2020.
(spotlight)
[pdf] [arxiv] [talk] [code] [bib]
@inproceedings{lee_disnip_iclr20,
  author    = {Lee, Namhoon and Ajanthan, Thalaiyasingam and Gould, Stephen and Torr, Philip H. S.},
  title     = {A Signal Propagation Perspective for Pruning Neural Networks at Initialization},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2020}
}