Office: Room 949, Graduate School of Science Building 1

Department of Physics, University of Tokyo

I am a graduate student in physics at the University of Tokyo, where I do research in theoretical physics in the Ueda Group. Outside of research, I am interested in art, literature, and philosophy, and I play Go. Even at my best, I speak poor English, worse Japanese, and even worse Shanghainese. If you have questions, want to collaborate, or just want to say hi, please shoot me an email.

Doctoral thesis: Symmetry breaking in deep learning (深層学習に於ける対称性の破れ, 2023).

Master's thesis: Mean-field learning dynamics of deep neural networks (2020).

My research interests include:

- phase transitions and symmetry breaking in neural networks
- analytically solvable models of deep learning
- the design of principled and simple learning algorithms
- the loss landscape of neural networks:
  - symmetry and the loss landscape of self-supervised learning
  - posterior collapse in Bayesian deep learning
  - the landscape of deep linear networks
  - the prediction variance of stochastic neural networks
  - interpolation and extrapolation properties of neural networks
  - phase transitions in deep learning
- the dynamics of stochastic gradient descent:
  - understanding different regimes of learning through probabilistic stability
  - the fluctuations of stochastic gradient descent
  - the nature of minibatch noise
  - power-law landscapes and escape from local minima
  - escape from saddle points
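As a toy illustration of the SGD-fluctuation questions above (the quadratic per-sample loss, learning rate, and batch size here are illustrative choices of mine, not taken from any of the papers below): minibatch sampling turns the gradient into a noisy estimate, so the iterate does not settle at the minimizer but fluctuates around it with a stationary spread set by the learning rate and batch size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with per-sample loss L_i(w) = 0.5 * (w - x_i)^2, so the
# full-batch gradient is (w - x.mean()) and minibatch sampling injects
# gradient noise proportional to the spread of the data.
x = rng.normal(loc=1.0, scale=2.0, size=1000)

def minibatch_grad(w, batch_size=32):
    """Gradient of the loss on a random minibatch: a noisy estimate."""
    batch = rng.choice(x, size=batch_size, replace=False)
    return w - batch.mean()

# Run SGD and record the trajectory; the iterate converges toward the
# minimizer w* = x.mean() but keeps fluctuating around it.
w, lr = 0.0, 0.1
traj = []
for _ in range(2000):
    w -= lr * minibatch_grad(w)
    traj.append(w)

traj = np.array(traj[500:])  # discard the initial transient
print("empirical mean:", traj.mean())  # close to x.mean()
print("stationary std:", traj.std())   # nonzero: the SGD fluctuation
```

The nonzero stationary standard deviation is the minibatch-noise effect in its simplest form; shrinking the learning rate or enlarging the batch shrinks it.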

I also do research on statistical physics.

**The Implicit Bias of Gradient Noise: A Symmetry Perspective**
Liu Ziyin, Mingze Wang, Lei Wu
Preprint 2024
[paper] [arXiv]

**When Does Feature Learning Happen? Perspective from an Analytically Solvable Model**
Yizhou Xu, Liu Ziyin
Preprint 2024
[paper] [arXiv]

**Symmetry Leads to Structured Constraint of Learning**
Liu Ziyin
Preprint 2023
[paper] [arXiv]

**Law of Balance and Stationary Distribution of Stochastic Gradient Descent**
Liu Ziyin*, Hongchao Li*, Masahito Ueda
Preprint 2023
[paper] [arXiv]

**Probabilistic Stability of Stochastic Gradient Descent**
Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
Preprint 2023
[paper] [arXiv]

**Zeroth, first, and second-order phase transitions in deep neural networks**
Liu Ziyin, Masahito Ueda
Physical Review Research 2023
[paper] [arXiv]

**Exact Solutions of a Deep Linear Network**
Liu Ziyin, Botao Li, Xiangming Meng
Journal of Statistical Mechanics: Theory and Experiment, 2023
[paper] [arXiv]

**On the stepwise nature of self-supervised learning**
James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
ICML 2023
[paper] [arXiv]

**Sparsity by Redundancy: Solving L1 with SGD**
Liu Ziyin*, Zihao Wang*
ICML 2023
[paper] [arXiv]

**What shapes the loss landscape of self-supervised learning?**
Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
ICLR 2023
[paper] [arXiv]

**Exact Solutions of a Deep Linear Network**
Liu Ziyin, Botao Li, Xiangming Meng
NeurIPS 2022
[paper] [arXiv]

**Posterior Collapse of a Linear Latent Variable Model**
Zihao Wang*, Liu Ziyin*
NeurIPS 2022 (oral: 1% of all submissions)
[paper] [arXiv]

**Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics**
Liu Ziyin, Masahito Ueda
Physical Review Research (2022)
[paper] [arXiv]

**Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction**
Liu Ziyin, Kentaro Minami, Kentaro Imajo
ICAIF 2022 (3rd ACM International Conference on AI in Finance)
[paper] [arXiv]

**Power Laws and Symmetries in a Minimal Model of Financial Market Economy**
Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
Physical Review Research (2022)
[paper] [arXiv]

**Logarithmic landscape and power-law escape rate of SGD**
Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
ICML 2022
[paper] [arXiv]

**SGD with a Constant Large Learning Rate Can Converge to Local Maxima**
Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
ICLR 2022 (spotlight: 5% of all submissions)
[paper] [arXiv]

**Strength of Minibatch Noise in SGD**
Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
ICLR 2022 (spotlight: 5% of all submissions)
[paper] [arXiv]

**On the Distributional Properties of Adaptive Gradients**
Zhang Zhiyi*, Liu Ziyin*
UAI 2021
[paper] [arXiv]

**Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent**
Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
ICML 2021
[paper] [arXiv]

**Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment**
Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
ACM Multimedia 2021; NeurIPS 2020 Workshop on Meta Learning
[arXiv] [code]

**Neural Networks Fail to Learn Periodic Functions and How to Fix It**
Liu Ziyin, Tilman Hartwig, Masahito Ueda
NeurIPS 2020
[paper] [arXiv]

**Deep Gamblers: Learning to Abstain with Portfolio Theory**
Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
NeurIPS 2019
[paper] [arXiv] [code]

**Think Locally, Act Globally: Federated Learning with Local and Global Representations**
Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
[paper] [arXiv] [code]

**Multimodal Language Analysis with Recurrent Multistage Fusion**
Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
EMNLP 2018 (oral presentation)
[paper] [supp] [arXiv] [slides]

**Collapse and Phase Transition in Deep Learning**
The Institute of Statistical Mathematics (統計数理研究所), Japan, March 2023

**What shapes the loss landscape of Self-Supervised Learning?**
NTT Japan, Musashino R&D Center, February 2023

**The Probabilistic Stability and Low-Rank Bias of SGD**
Math Machine Learning seminar, Max Planck Institute/UCLA, January 26, 2023

**Two Low-Rank Mechanisms in Deep Learning**
Haim Sompolinsky Lab Seminar, Harvard University, January 16, 2023

**Collapse and Phase Transition in Deep Learning**
Tomaso Poggio Lab, MIT, November 7, 2022

**Collapse and Phase Transition in Deep Learning**
NeuroTheory Seminar, Harvard University, October 11, 2022

**Does the Refractory Period Help Learning? A Spiking Neural Network Perspective**
Harvard-LMU Young Scientists' Forum, 2022

**Stochastic Gradient Descent with Multiplicative Noise**
RIKEN AIP, 2021

**Careful Deep Learning: Learning to Abstain by Training on a Simple Loss Function**
IPI Seminar, 2019

**A Full Potential Approach to the Solution of Core States**
American Physical Society March Meeting, 2018


I also serve as a reviewer for ICML, IJCAI, CVPR, ICCV, AISTATS, UAI, NeurIPS, ICLR, TMLR, IEEE-TSP, TPAMI, KDD, IEEE-TNNLS, JMLR, SIAM-SDM, and other venues.
