Liu Ziyin, UTokyo 刘子寅
Update: Okinawa is a great place.
Liu Ziyin 刘子寅
Email: liu.ziyin.p (at) gmail.com
Office: Room 949, Graduate School of Science Building 1
Department of Physics, University of Tokyo
I am a graduate student in physics at the University of Tokyo, where I do research in theoretical physics in the Ueda Group. Personally, I am interested in art, literature, and philosophy, and I also play Go. Despite my best efforts, I speak poor English, awful Japanese, and even worse Shanghainese. If you have questions, want to collaborate, or just want to say hi, please shoot me an email.
Doctoral thesis: Symmetry breaking in deep learning (深層学習に於ける対称性の破れ, 2023).
Master's thesis: Mean-field learning dynamics of deep neural networks (2020).
Research Interest
I am particularly interested in building a scientific theory of deep learning, and I think tools and intuitions from physics can be of great help. Currently, I am interested in the following problems:
- phase transitions and symmetry breaking in neural networks
- analytically solvable models of deep learning
- the design of simple, principled learning algorithms
Past Research
So far, I have worked on the following problems:
I do my best to develop solid deep learning methods:
achieving L1 penalty with gradient descent
a horse-race-inspired black-box method for uncertainty estimation in deep learning
a theoretically motivated data augmentation method for financial portfolio construction
I also do research in statistical physics:
a universal thermodynamic uncertainty relation
econophysics
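The first item above, achieving an L1 penalty with plain gradient descent, is based on a redundant reparameterization (see "Sparsity by Redundancy: Solving L1 with SGD" below): writing each weight as w = u·v and applying an ordinary L2 penalty to u and v is, at the minimum, equivalent to an L1 penalty on w, since u² + v² ≥ 2|uv|. A minimal one-dimensional sketch in plain Python (the target value, learning rate, and iteration count are illustrative choices, not from the paper):

```python
import math

def solve_l1_by_redundancy(t, lam=1.0, lr=0.01, steps=5000):
    """Minimize (u*v - t)**2 + (lam/2)*(u**2 + v**2) by gradient descent.

    At the minimum, w = u*v solves the L1-penalized problem
    min_w (w - t)**2 + lam*|w|.
    """
    u, v = 1.0, 1.0  # symmetric initialization
    for _ in range(steps):
        r = u * v - t             # residual of the data-fitting term
        gu = 2 * r * v + lam * u  # d/du of the full objective
        gv = 2 * r * u + lam * v  # d/dv of the full objective
        u, v = u - lr * gu, v - lr * gv
    return u * v                  # the effective weight w

def soft_threshold(t, lam=1.0):
    """Closed-form minimizer of (w - t)**2 + lam*|w|, for comparison."""
    return math.copysign(max(abs(t) - lam / 2, 0.0), t)

w = solve_l1_by_redundancy(3.0)
# w is close to soft_threshold(3.0) = 2.5; for small t (e.g. t = 0.2),
# gradient descent drives u*v to exactly zero, i.e. a sparse solution.
```

Note that nothing in the loop ever computes |w| or its (non-smooth) subgradient; sparsity emerges purely from smooth gradient descent on the redundant parameters.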
Recent Preprints
- Law of Balance and Stationary Distribution of Stochastic Gradient Descent
Liu Ziyin*, Hongchao Li*, Masahito Ueda
Preprint 2023
[paper] [arXiv]
- Probabilistic Stability of Stochastic Gradient Descent
Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
Preprint 2023
[paper] [arXiv]
- Exact Phase Transitions in Deep Learning
Liu Ziyin, Masahito Ueda
Preprint 2022
[paper] [arXiv]
Publications
- Exact Solutions of a Deep Linear Network
Liu Ziyin, Botao Li, Xiangming Meng
Journal of Statistical Mechanics: Theory and Experiment, 2023
[paper] [arXiv]
- On the stepwise nature of self-supervised learning
James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
ICML 2023
[paper] [arXiv]
- Sparsity by Redundancy: Solving L1 with SGD
Liu Ziyin*, Zihao Wang*
ICML 2023
[paper] [arXiv]
- What shapes the loss landscape of self-supervised learning?
Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
ICLR 2023
[paper] [arXiv]
- Exact Solutions of a Deep Linear Network
Liu Ziyin, Botao Li, Xiangming Meng
NeurIPS 2022
[paper] [arXiv]
- Posterior Collapse of a Linear Latent Variable Model
Zihao Wang*, Liu Ziyin*
NeurIPS 2022 (oral: 1% of all submissions)
[paper] [arXiv]
- Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
Liu Ziyin, Masahito Ueda
Physical Review Research (2022)
[paper] [arXiv]
- Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
Liu Ziyin, Kentaro Minami, Kentaro Imajo
ICAIF 2022 (3rd ACM International Conference on AI in Finance)
[paper] [arXiv]
- Power Laws and Symmetries in a Minimal Model of Financial Market Economy
Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
Physical Review Research (2022)
[paper] [arXiv]
- Logarithmic landscape and power-law escape rate of SGD
Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
ICML 2022
[paper] [arXiv]
- SGD with a Constant Large Learning Rate Can Converge to Local Maxima
Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
ICLR 2022 (spotlight: 5% of all submissions)
[paper] [arXiv]
- Strength of Minibatch Noise in SGD
Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
ICLR 2022 (spotlight: 5% of all submissions)
[paper] [arXiv]
- On the Distributional Properties of Adaptive Gradients
Zhang Zhiyi*, Liu Ziyin*
UAI 2021
[paper] [arXiv]
- Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
ICML 2021
[paper] [arXiv]
- Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
ACM Multimedia 2021
NeurIPS 2020 Workshop on Meta Learning
[arXiv] [code]
- Neural Networks Fail to Learn Periodic Functions and How to Fix It
Liu Ziyin, Tilman Hartwig, Masahito Ueda
NeurIPS 2020
[paper] [arXiv]
- Deep Gamblers: Learning to Abstain with Portfolio Theory
Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
NeurIPS 2019
[paper] [arXiv] [code]
- Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
[paper] [arXiv] [code]
- Multimodal Language Analysis with Recurrent Multistage Fusion
Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
EMNLP 2018 (oral presentation)
[paper] [supp] [arXiv] [slides]
Presentations and Invited Talks
- Collapse and Phase Transition in Deep Learning
The Institute of Statistical Mathematics (統計数理研究所), Japan, March, 2023
- What shapes the loss landscape of Self-Supervised Learning?
NTT Japan, Musashino R&D Center, February, 2023
- The Probabilistic Stability and Low-Rank Bias of SGD
Math Machine Learning seminar, Max Planck Institute/UCLA, January 26, 2023
[link]
- Two Low-Rank Mechanisms in Deep Learning
Haim Sompolinsky Lab Seminar, Harvard University, January 16, 2023
- Collapse and Phase Transition in Deep Learning
Tomaso Poggio Lab, MIT, November 7th, 2022
[slides]
- Collapse and Phase Transition in Deep Learning
NeuroTheory Seminar, Harvard University, October 11th, 2022
[slides]
- Does the Refractory Period Help Learning? A Spiking Neural Network Perspective
Harvard-LMU Young Scientists' Forum, 2022
- Stochastic Gradient Descent with Multiplicative Noise
RIKEN AIP, 2021
[link] [video]
- Careful Deep Learning: Learning to Abstain by Training on A Simple Loss Function
IPI Seminar, 2019
[ppt]
- A Full Potential Approach to the Solution of Core States
American Physical Society March Conference, 2018
[abstract] [ppt] [poster]
I also serve as a reviewer for ICML, IJCAI, CVPR, ICCV, AISTATS, UAI, NeurIPS, ICLR, TMLR, IEEE-TSP, TPAMI, KDD, IEEE-TNNLS, JMLR, among others.