Xin Yang
Research Scientist

yangxin dot yx at bytedance dot com
ByteDance Applied Machine Learning (AML) research team in Seattle
Google Scholar | DBLP


I am a research scientist on the ByteDance Applied Machine Learning (AML) research team in Seattle. My research interests are in learning theory, deep learning, and optimization. I am also interested in lower bounds and hardness results in computational complexity. Previously, I received my Ph.D. from the Paul G. Allen School of Computer Science & Engineering at the University of Washington in 2020, where I was fortunate to be advised by Professor Paul Beame and Professor Kevin Jamieson. Before that, I received my B.E. in computer science from IIIS (Yao Class) at Tsinghua University in 2014.


(*alphabetical author order)

Number Balancing is as Hard as Minkowski's Theorem and Shortest Vector, Rebecca Hoberg*, Harishchandra Ramadas*, Thomas Rothvoss*, Xin Yang*, IPCO 2017, arXiv.

Canaries in the Network, Danyang Zhuo, Qiao Zhang, Xin Yang, Vincent Liu, HotNets 2016.

Time-Space Tradeoffs for Learning Finite Functions from Random Evaluations, with Applications to Polynomials, Paul Beame*, Shayan Oveis Gharan*, Xin Yang*, COLT 2018, arXiv.

On the Bias of Reed-Muller Codes over Odd Prime Fields, Paul Beame*, Shayan Oveis Gharan*, Xin Yang*, SIAM Journal on Discrete Mathematics 34 (2), 1232-1247, arXiv.

A Near-Optimal Algorithm for Approximating John's Ellipsoid, Michael B. Cohen*, Ben Cousins*, Yin Tat Lee*, Xin Yang*, COLT 2019, arXiv.

Total Least Squares Regression in Input Sparsity Time, Huaian Diao*, Zhao Song*, David Woodruff*, Xin Yang*, NeurIPS 2019, arXiv.

Sketching Transformed Matrices with Applications to Natural Language Processing, Yingyu Liang*, Zhao Song*, Mengdi Wang*, Lin Yang*, Xin Yang*, AISTATS 2020, arXiv.

Label Leakage and Protection in Two-party Split Learning, Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, Chong Wang, ICLR 2022. A preliminary version appeared at the NeurIPS 2020 Workshop on Scalability, Privacy, and Security in Federated Learning, arXiv.

FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Convergence Analysis, Baihe Huang*, Xiaoxiao Li*, Zhao Song*, Xin Yang*, ICML 2021, arXiv.

Differentially Private Multi-Party Data Release for Linear Regression, Ruihan Wu, Xin Yang, Yuanshun Yao, Jiankai Sun, Tianyi Liu, Kilian Q. Weinberger, Chong Wang, UAI 2022, arXiv.