Shiwei Liu

Hi, I am a Royal Society Newton International Fellow at the University of Oxford. Previously, I was a postdoctoral fellow in the VITA group, funded by IFML, at UT Austin, working with Atlas Wang. I obtained my Ph.D. at the Eindhoven University of Technology (TU/e), the Netherlands, under the supervision of Mykola Pechenizkiy and Decebal Constantin Mocanu. My CV is here.

Research Interests

Machine Learning, Deep Learning, Sparsity, Large Language Models.

News

4/2024, Tutorial: I will give a tutorial on “Sparsity in Large Language Models: The New Odyssey” at ICASSP 2024 (14 April 2024). Slides are shared here.

2/2024, Quantized-Sparse KV Cache got accepted by MLSys 2024.

1/2024, I am highly honored to receive the Rising Star in AI award from KAUST and will give a talk at the Rising Stars in AI Symposium.

1/2024, 3 papers got accepted by ICLR 2024: Training-Free Sparse LLM Fine-tuning, Multi-Task Vector Merging, Sparse Training with Neuron Revitalization.

11/2023, Block Sparse Training accepted by CPAL.

10/2023, I am deeply grateful to receive the Best PhD Dissertation Runner-up Award from Informatics Europe.

10/2023, I am highly honored to receive the Rising Star Award from CPAL and will give a presentation at HKU in January 2024.

10/2023, our tutorial “Sparsity in Large Language Models: The New Odyssey” has been accepted at ICASSP 2024. See you in Seoul, Korea.

9/2023, 4 papers got accepted by NeurIPS 2023: Channel-Level DST, Essential Sparsity, Pruning Topology, Node-Path Balance.

9/2023, I am grateful to receive the prestigious Newton International Fellowship from the British Academy and the Royal Society.

7/2023, our Flatness-aware Continual Learning paper got accepted at ICCV 2023.

6/2023, I will give an invited talk at the CTSTA 2023 workshop at PLDI 2023 on June 18 in Orlando, USA.

6/2023, 2 papers got accepted at ECML-PKDD 2023: Robust Overfitting and Debiased Sparse Training.

5/2023, STU-GAN, the work I conducted during my internship at JD Explore Academy, has been accepted by the International Journal of Computer Vision (IJCV).

4/2023, 3 papers got accepted by ICML 2023: Instant Soup (oral), Large Kernel Distillation, and Graph Ladling.

3/2023, our SNN Ten Lessons and Channel-Level DST papers have been accepted as spotlight presentations at the SNN workshop.

2/2023, our Feature Selection with Neuron Evolution paper has been accepted by Transactions on Machine Learning Research (TMLR).

1/2023, 4 papers got accepted by ICLR 2023: Ramanujan Graph Pruning (oral, top 5%), Sparsity May Cry Benchmark (spotlight, top 25%), MoE as Dropout (spotlight, top 25%), and SLaK: 51x51 Large Conv. Looking forward to meeting everyone and having fun in Rwanda.

12/2022, our Untrained GNNs paper received the Best Paper Award from LoG 2022.

11/2022, our Lottery-Pools paper got accepted by AAAI 2023.

9/2022, our Sparse Large Kernels for Time Series paper got accepted by NeurIPS 2022.

25/8/2022, I moved to Austin, Texas, USA, as a postdoctoral fellow in the VITA group and the Institute for Foundations of Machine Learning (IFML) at UT Austin, under the supervision of Atlas Wang.

7/2022, our Brain-inspired Highly Sparse NN paper got accepted for publication in the Machine Learning journal.

5/2022, our Sup-Tickets paper got accepted by UAI 2022.

4/2022, our Sparse Neural Networks Training tutorial has been accepted at ECML-PKDD 2022.

6/4/2022, I received my PhD cum laude (distinguished thesis).

3/2022, my PhD thesis abstract was accepted by IDA 2022, which was also the first conference (symposium) I attended in the first year of my PhD. PhD life comes full circle :).

2/2022, I am honored to receive a postdoctoral fellowship from IFML at The University of Texas at Austin.

1/2022, (2/3) two of my first-author papers were accepted by ICLR 2022: Random Pruning and FreeTickets.

12/2021, I received the “Outstanding Intern” honor at JD Explore Academy.

9/2021, (1/1) one of my first-author papers was accepted by NeurIPS 2021: GraNet.

6/2021, I moved to Beijing, China for my internship at JD Explore Academy, under the supervision of Li Shen and Dacheng Tao.

5/2021, (2/2) two of my first-author papers were accepted by ICML 2021: In-Time Over-Parameterization and Selfish RNN.

Selected Publications

[1] Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, and Mykola Pechenizkiy. “Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training.” International Conference on Machine Learning (ICML), 2021. [Paper].

[2] Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Mykola Pechenizkiy, Yi Liang, Zhangyang Wang, Shiwei Liu. “Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity.” arXiv preprint arXiv:2310.05175.

[3] Shiwei Liu, Tianlong Chen*, Xiaohan Chen*, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Constantin Mocanu, and Zhangyang Wang. “More ConvNets in the 2020s: Scaling Up Kernels Beyond 51x51 Using Sparsity.” International Conference on Learning Representations (ICLR), 2023. [Paper].

[4] Shiwei Liu*, Tianlong Chen*, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, and Zhangyang Wang. “Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!” International Conference on Learning Representations (ICLR), 2023. Notable Top 25% (Spotlight). [Paper].

[5] Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu. “You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained Graph Tickets.” Learning on Graphs Conference (LoG), 2022. Best Paper Award. [Paper].

Honors and Awards

🏆 Rising Star Award, Conference on Parsimony and Learning (CPAL)

🏆 Best Ph.D. Dissertation Runner-up Award, Informatics Europe 2023

🏆 Best Paper Award, Learning on Graphs Conference (LoG 2022)

🏆 Cum Laude (distinguished Ph.D. thesis), Eindhoven University of Technology (TU/e)

🏆 Outstanding Intern, JD Explore Academy

Career

Period                 Position/Degree                Institution
Jan. 2024 - Present    Newton International Fellow    The University of Oxford
Aug. 2022 - Oct. 2023  Postdoctoral Fellow            The University of Texas at Austin
Mar. 2018 - Mar. 2022  Ph.D.                          Eindhoven University of Technology
Sep. 2015 - Jul. 2018  Master                         Harbin Institute of Technology