Shiwei Liu
Hi, I am a postdoctoral fellow in the VITA group and the Institute for Foundations of Machine Learning (IFML) at UT Austin, under the supervision of Atlas Wang. I obtained my Ph.D. at the Eindhoven University of Technology (TU/e), the Netherlands, under the supervision of Mykola Pechenizkiy and Decebal Constantin Mocanu.
I'm on the job market and looking for full-time research positions. My CV is here.
Research Interests
Machine Learning, Deep Learning, Sparsity, Large Language Models.
News
7/2023, our paper Flatness-aware Continual Learning was accepted at ICCV 2023.
6/2023, I will give an invited talk at the CTSTA 2023 workshop at PLDI 2023 on the 18th of June in Orlando, USA.
6/2023, two papers were accepted at ECMLPKDD 2023: Robust Overfitting and Debiased Sparse Training.
5/2023, the work I conducted during my internship at JD Explore Academy has been accepted by the International Journal of Computer Vision (IJCV): STU-GAN.
4/2023, three papers were accepted at ICML 2023: Instant Soup (Oral), Large Kernel Distillation, and Graph Ladling.
3/2023, our SNN Ten Lessons and Channel-Level DST papers have been accepted as spotlight presentations at the SNN workshop.
2/2023, our Feature Selection with Neuron Evolution paper has been accepted in Transactions on Machine Learning Research (TMLR).
1/2023, four papers were accepted at ICLR 2023: Ramanujan Graph Pruning (Oral, top 5%), Sparsity May Cry Benchmark (Spotlight, top 25%), MoE as Dropout (Spotlight, top 25%), and SLaK: 51x51 Large Conv. Looking forward to meeting everyone and having fun in Rwanda.
12/2022, our Untrained GNNs paper received the Best Paper Award from LoG 2022.
11/2022, our Lottery-Pools paper was accepted at AAAI 2023.
9/2022, our Sparse Large Kernel on Time Series paper was accepted at NeurIPS 2022.
25/8/2022, I moved to Austin, Texas, USA, as a postdoctoral fellow in the VITA group and the Institute for Foundations of Machine Learning (IFML) at UT Austin, under the supervision of Atlas Wang.
7/2022, our paper Brain-inspired Highly Sparse NN was accepted for publication in the Machine Learning Journal.
5/2022, our Sup-Tickets paper was accepted at UAI 2022.
4/2022, our tutorial Sparse Neural Networks Training has been accepted at ECMLPKDD 2022.
6/4/2022, I received my Ph.D. cum laude (distinguished thesis).
3/2022, my Ph.D. thesis abstract was accepted at IDA 2022, which was also the first conference (symposium) I attended in the first year of my Ph.D. Ph.D. life comes full circle :).
2/2022, I am honored to receive the IFML postdoctoral fellowship at The University of Texas at Austin.
1/2022, (2/3) two of my first-author papers were accepted at ICLR 2022: Random Pruning and FreeTickets.
12/2021, I received the "Outstanding Intern" honor at JD Explore Academy.
9/2021, (1/1) one of my first-author papers was accepted at NeurIPS 2021: GraNet.
6/2021, I moved to Beijing, China for my internship at JD Explore Academy, under the supervision of Li Shen and Dacheng Tao.
5/2021, (2/2) two of my first-author papers were accepted at ICML 2021: In-Time Over-Parameterization and Selfish RNN.
Selected Publications
[1] Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu. “You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained Graph Tickets.” Learning on Graphs Conference (LoG), 2022. Best Paper Award. [Paper].
[2] Duc N.M Hoang, Shiwei Liu, Radu Marculescu, Zhangyang Wang. “Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph.” International Conference on Learning Representations (ICLR), 2023. Notable-Top-5% Oral. [Paper].
[3] Shiwei Liu*, Tianlong Chen*, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, and Zhangyang Wang. “Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!” International Conference on Learning Representations (ICLR), 2023. Notable-Top-25% Spotlight. [Paper].
[4] Shiwei Liu, Tianlong Chen*, Xiaohan Chen*, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Constantin Mocanu, and Zhangyang Wang. “More ConvNets in the 2020s: Scaling Up Kernels Beyond 51x51 Using Sparsity.” International Conference on Learning Representations (ICLR), 2023. [Paper].
[5] Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, and Mykola Pechenizkiy. “Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training.” The Thirty-Eighth International Conference on Machine Learning (ICML), PMLR, 2021. [Paper].
Honors and Awards
🏆 Best Paper Award, Learning on Graphs Conference (LoG 2022)
🏆 Cum Laude (distinguished Ph.D. thesis), Eindhoven University of Technology (TU/e)
🏆 Outstanding Intern, JD Explore Academy
🏆 IFML Postdoctoral Fellowship, UT Austin, US
🏆 Outstanding Graduate, North University of China, China
Career
Period | Position | Institution |
---|---|---|
Aug. 2022 - Present | Postdoctoral Fellow | The University of Texas at Austin |
Jun. 2021 - Nov. 2021 | Research Intern | JD Explore Academy |
Mar. 2018 - Mar. 2022 | Ph.D. | Eindhoven University of Technology |
Sep. 2015 - Jul. 2018 | Master | Harbin Institute of Technology |