Abstract
The objective of neural network quantization is to learn networks whose parameters (and activations) are constrained to take values in a small discrete set, usually binary, which is highly desirable for resource-limited devices and/or real-time applications. In this project, we develop new algorithms for neural network quantization and take steps towards understanding the generalization and robustness properties of such quantized networks.
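For concreteness, the simplest form of this constraint is binary quantization of a weight tensor. The sketch below illustrates the idea with a scaled sign function (a generic illustration of binarization with a least-squares optimal scale, as in XNOR-Net-style schemes; the `binarize` helper is hypothetical and not the specific algorithm of any paper below):

```python
import numpy as np

def binarize(w):
    # Map each weight to {-alpha, +alpha} via its sign, where alpha is the
    # mean absolute value of the weights -- the scale that minimizes the
    # squared error || w - alpha * sign(w) ||^2.
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.array([0.3, -1.2, 0.05, -0.4])
wq = binarize(w)  # every entry now lies in {-alpha, +alpha}
```

In practice, quantized training keeps full-precision "latent" weights and applies a projection like this in the forward pass, which is what makes the gradient-based analyses in the papers below non-trivial.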
Publications
Improved Gradient based Adversarial Attacks for Quantized Networks.
Kartik Gupta, and
Thalaiyasingam Ajanthan.
Association for the Advancement of Artificial Intelligence (AAAI), November 2022.
[pdf] [arxiv] [bib]
@inproceedings{gupta_iga_aaai22,
author = {Gupta, Kartik and Ajanthan, Thalaiyasingam},
title = {Improved Gradient based Adversarial Attacks for Quantized Networks},
booktitle = {AAAI},
year = {2022}
}
Mirror Descent View for Neural Network Quantization.
Thalaiyasingam Ajanthan*, Kartik Gupta*, Philip H. S. Torr, Richard Hartley, and Puneet K. Dokania.
International Conference on Artificial Intelligence and Statistics (AISTATS), April 2021.
[pdf] [supp] [arxiv] [code] [bib]
@inproceedings{ajanthan_mdnnq_aistats21,
author = {Ajanthan, Thalaiyasingam and Gupta, Kartik and Torr, Philip H. S. and Hartley, Richard and Dokania, Puneet K.},
title = {Mirror Descent View for Neural Network Quantization},
booktitle = {AISTATS},
year = {2021}
}
Proximal Mean-field for Neural Network Quantization.
Thalaiyasingam Ajanthan, Puneet K. Dokania, Richard Hartley, and Philip H. S. Torr.
International Conference on Computer Vision (ICCV), October 2019.
[pdf] [supp] [arxiv] [poster] [talk] [code] [bib]
@inproceedings{ajanthan_pmf_iccv19,
author = {Ajanthan, Thalaiyasingam and Dokania, Puneet K. and Hartley, Richard and Torr, Philip H. S.},
title = {Proximal Mean-field for Neural Network Quantization},
booktitle = {ICCV},
year = {2019}
}