Liu Ziyin 刘子寅

Email: liu.ziyin.p (at) gmail.com
Office: Room 949, Graduate School of Science Building 1
Department of Physics, University of Tokyo


I am a graduate student in physics at the University of Tokyo, where I do research in (technically speaking) theoretical physics in the Ueda Group. Personally, I am interested in art, literature, and philosophy. I also play Go. Even at my best, I speak poor English, awful Japanese, and even worse Shanghainese. If you have questions, want to collaborate, or just want to say hi, please shoot me an email.

Doctoral thesis: Symmetry breaking in deep learning (深層学習に於ける対称性の破れ, 2023).
Master's thesis: Mean-field learning dynamics of deep neural networks (2020).

Research Interest

I am particularly interested in building a scientific theory of deep learning, and I think tools and intuitions from physics can be of great help. Currently, I am interested in the following problems:

Past Research

So far, I have worked on the following problems. I do my best to establish solid deep learning methods:
  • achieving an L1 penalty with gradient descent (a small sketch of the idea appears right after these bullets)
  • a horse-race-inspired blackbox method for uncertainty estimation in deep learning
  • a theoretically motivated data augmentation method for financial portfolio construction
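
The first bullet above relies on a redundant-parameterization trick. Below is a minimal sketch of that idea, assuming PyTorch; the data, variable names, and hyperparameters are illustrative and not taken from the paper. Writing each weight as a product w = u * v and applying ordinary weight decay to u and v behaves like an L1 penalty on w, since the minimum of (u^2 + v^2)/2 over all factorizations u * v = w equals |w| elementwise.

    import torch

    torch.manual_seed(0)
    X = torch.randn(200, 50)
    w_true = torch.zeros(50)
    w_true[:5] = 1.0                          # only 5 informative features
    y = X @ w_true + 0.1 * torch.randn(200)

    # Redundant parameterization: the regression weight is w = u * v (elementwise).
    u = torch.randn(50, requires_grad=True)
    v = torch.randn(50, requires_grad=True)
    # Plain SGD with ordinary L2 weight decay on u and v; no explicit L1 term anywhere.
    opt = torch.optim.SGD([u, v], lr=1e-2, weight_decay=1e-2)

    for _ in range(5000):
        opt.zero_grad()
        w = u * v
        loss = ((X @ w - y) ** 2).mean()      # smooth, differentiable loss
        loss.backward()
        opt.step()

    w = (u * v).detach()
    print(f"{(w.abs() < 1e-3).sum().item()} of 50 weights have magnitude below 1e-3")

In this toy setup, the weights attached to the uninformative features are pushed to (numerically) zero, which is the sparsity one would expect from an explicit L1 penalty of comparable strength.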

I also do research on statistical physics:
  • a universal thermodynamic uncertainty relation
  • econophysics

Recent Preprints

1. The Implicit Bias of Gradient Noise: A Symmetry Perspective
   Liu Ziyin, Mingze Wang, Lei Wu
   Preprint 2024
   [paper] [arXiv]
2. When Does Feature Learning Happen? Perspective from an Analytically Solvable Model
   Yizhou Xu, Liu Ziyin
   Preprint 2024
   [paper] [arXiv]
3. Symmetry Leads to Structured Constraint of Learning
   Liu Ziyin
   Preprint 2023
   [paper] [arXiv]
4. Law of Balance and Stationary Distribution of Stochastic Gradient Descent
   Liu Ziyin*, Hongchao Li*, Masahito Ueda
   Preprint 2023
   [paper] [arXiv]
5. Probabilistic Stability of Stochastic Gradient Descent
   Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
   Preprint 2023
   [paper] [arXiv]

Publications

1. Zeroth, first, and second-order phase transitions in deep neural networks
   Liu Ziyin, Masahito Ueda
   Physical Review Research 2023
   [paper] [arXiv]
2. Exact Solutions of a Deep Linear Network
   Liu Ziyin, Botao Li, Xiangming Meng
   Journal of Statistical Mechanics: Theory and Experiment, 2023
   [paper] [arXiv]
3. On the stepwise nature of self-supervised learning
   James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht
   ICML 2023
   [paper] [arXiv]
4. Sparsity by Redundancy: Solving L1 with SGD
   Liu Ziyin*, Zihao Wang*
   ICML 2023
   [paper] [arXiv]
5. What shapes the loss landscape of self-supervised learning?
   Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda, Hidenori Tanaka
   ICLR 2023
   [paper] [arXiv]
6. Exact Solutions of a Deep Linear Network
   Liu Ziyin, Botao Li, Xiangming Meng
   NeurIPS 2022
   [paper] [arXiv]
7. Posterior Collapse of a Linear Latent Variable Model
   Zihao Wang*, Liu Ziyin*
   NeurIPS 2022 (oral: 1% of all submissions)
   [paper] [arXiv]
8. Universal Thermodynamic Uncertainty Relation in Non-Equilibrium Dynamics
   Liu Ziyin, Masahito Ueda
   Physical Review Research (2022)
   [paper] [arXiv]
9. Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction
   Liu Ziyin, Kentaro Minami, Kentaro Imajo
   ICAIF 2022 (3rd ACM International Conference on AI in Finance)
   [paper] [arXiv]
10. Power Laws and Symmetries in a Minimal Model of Financial Market Economy
    Liu Ziyin, Katsuya Ito, Kentaro Imajo, Kentaro Minami
    Physical Review Research (2022)
    [paper] [arXiv]
11. Logarithmic landscape and power-law escape rate of SGD
    Takashi Mori, Liu Ziyin, Kangqiao Liu, Masahito Ueda
    ICML 2022
    [paper] [arXiv]
12. SGD with a Constant Large Learning Rate Can Converge to Local Maxima
    Liu Ziyin, Botao Li, James B. Simon, Masahito Ueda
    ICLR 2022 (spotlight: 5% of all submissions)
    [paper] [arXiv]
13. Strength of Minibatch Noise in SGD
    Liu Ziyin*, Kangqiao Liu*, Takashi Mori, Masahito Ueda
    ICLR 2022 (spotlight: 5% of all submissions)
    [paper] [arXiv]
14. On the Distributional Properties of Adaptive Gradients
    Zhang Zhiyi*, Liu Ziyin*
    UAI 2021
    [paper] [arXiv]
15. Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
    Kangqiao Liu*, Liu Ziyin*, Masahito Ueda
    ICML 2021
    [paper] [arXiv]
16. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
    Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
    ACM Multimedia 2021
    NeurIPS 2020 Workshop on Meta Learning
    [arXiv] [code]
17. Neural Networks Fail to Learn Periodic Functions and How to Fix It
    Liu Ziyin, Tilman Hartwig, Masahito Ueda
    NeurIPS 2020
    [paper] [arXiv]
18. Deep Gamblers: Learning to Abstain with Portfolio Theory
    Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
    NeurIPS 2019
    [paper] [arXiv] [code]
19. Think Locally, Act Globally: Federated Learning with Local and Global Representations
    Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency
    NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
    [paper] [arXiv] [code]
20. Multimodal Language Analysis with Recurrent Multistage Fusion
    Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
    EMNLP 2018 (oral presentation)
    [paper] [supp] [arXiv] [slides]

Presentations and Invited Talks

1. Collapse and Phase Transition in Deep Learning
   The Institute of Statistical Mathematics (統計数理研究所), Japan, March 2023
2. What shapes the loss landscape of Self-Supervised Learning?
   NTT Japan, Musashino R&D Center, February 2023
3. The Probabilistic Stability and Low-Rank Bias of SGD
   Math Machine Learning seminar, Max Planck Institute/UCLA, January 26, 2023
   [link]
4. Two Low-Rank Mechanisms in Deep Learning
   Haim Sompolinsky Lab Seminar, Harvard University, January 16, 2023
5. Collapse and Phase Transition in Deep Learning
   Tomaso Poggio Lab, MIT, November 7, 2022
   [slides]
6. Collapse and Phase Transition in Deep Learning
   NeuroTheory Seminar, Harvard University, October 11, 2022
   [slides]
7. Does the Refractory Period Help Learning? A Spiking Neural Network Perspective
   Harvard-LMU Young Scientists' Forum, 2022
8. Stochastic Gradient Descent with Multiplicative Noise
   RIKEN AIP, 2021
   [link] [video]
9. Careful Deep Learning: Learning to Abstain by Training on A Simple Loss Function
   IPI Seminar, 2019
   [ppt]
10. A Full Potential Approach to the Solution of Core States
    American Physical Society March Meeting, 2018
    [abstract] [ppt] [poster]

I also serve as a reviewer for ICML, IJCAI, CVPR, ICCV, AISTATS, UAI, NeurIPS, ICLR, TMLR, IEEE-TSP, TPAMI, KDD, IEEE-TNNLS, JMLR, SIAM-SDM...


This page has been accessed several times since July 07, 2018.