Souvik Kundu
Research Scientist, Intel Labs; Ph.D., University of Southern California
Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression
S Kundu, G Datta, M Pedram, PA Beerel
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2021 …, 2021
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training With Crafted Input Noise
S Kundu, M Pedram, PA Beerel
IEEE/CVF International Conference on Computer Vision (ICCV 2021), 5209-5218, 2021
DNR: A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs
S Kundu, M Nazemi, PA Beerel, M Pedram
Proceedings of the 26th ASP-DAC 2021, 344-350, 2021
Pre-Defined Sparsity for Low-Complexity Convolutional Neural Networks
S Kundu, M Nazemi, M Pedram, KM Chugg, PA Beerel
IEEE Transactions on Computers 69 (7), 1045-1058, 2020
Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding
G Datta, S Kundu, PA Beerel
IJCNN 2021, 2021
P2M: A Processing-in-Pixel-in-Memory Paradigm for Resource-Constrained TinyML Applications
S Kundu, G Datta, Z Yin, RT Lakkireddy, PA Beerel, A Jacob, ARE Jaiswal
Nature Scientific Reports, 2022
Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference
S Kundu, S Lu, Y Zhang, J Liu, PA Beerel
International Conference on Learning Representations (ICLR), 2023
ACE-SNN: Algorithm-Hardware Co-Design of Energy-efficient & Low-Latency Deep Spiking Neural Networks for 3D Image Recognition
G Datta, S Kundu, A Jaiswal, PA Beerel
Frontiers in Neuroscience, 2022
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation
S Kundu, Q Sun, Y Fu, M Pedram, P Beerel
Advances in Neural Information Processing Systems (NeurIPS 2021) 34, 2021
AttentionLite: Towards Efficient Self-Attention Models for Vision
S Kundu, S Sundaresan
ICASSP 2021, 2021
PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices
Y Hu, C Imes, X Zhao, S Kundu, PA Beerel, SP Crago, JP Walters
2022 25th Euromicro Conference on Digital System Design (DSD), 298-307, 2022
Revisiting Sparsity Hunting in Federated Learning: Why does Sparsity Consensus Matter?
S Kundu*, S Babakniya*, S Prakash, Y Niu, S Avestimehr
Transactions on Machine Learning Research (TMLR), 2023
A highly parallel FPGA implementation of sparse neural network training
S Dey, D Chen, Z Li, S Kundu, KW Huang, KM Chugg, PA Beerel
2018 International Conference on ReConFigurable Computing and FPGAs …, 2018
Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression
S Kundu, G Datta, M Pedram, PA Beerel
2nd Sparse Neural Networks Workshop (co-located with ICML 2022), 2021
pSConv: A Pre-defined Sparse Kernel Based Convolution for Deep CNNs
S Kundu, S Prakash, H Akrami, PA Beerel, KM Chugg
2019 57th Annual Allerton Conference on Communication, Control, and …, 2019
Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference
S Kundu, Y Zhang, D Chen, P Beerel
CVPR 2023 Efficient Computer Vision (ORAL), 2023
BMPQ: Bit-Gradient Sensitivity Driven Mixed-Precision Quantization of DNNs from Scratch
S Kundu, S Wang, Q Sun, PA Beerel, M Pedram
Design Automation and Test in Europe (DATE) 2022, 2021
ViTA: A Vision Transformer Inference Accelerator for Edge Applications
S Nag, G Datta, S Kundu, N Chandrachoodan, PA Beerel
ISCAS 2023, 2023
Toward Adversary-aware Non-iterative Model Pruning through Dynamic Network Rewiring of DNNs
S Kundu, Y Fu, B Ye, PA Beerel, M Pedram
ACM Transactions on Embedded Computing Systems 21 (5), 1-24, 2022
P2M-DeTrack: Processing-in-Pixel-in-Memory for Energy-efficient and Real-Time Multi-Object Detection and Tracking
S Kundu, G Datta, Z Yin, J Mathai, Z Liu, Z Wang, M Tian, S Lu, ...
VLSI-SoC 2022, 2022