Liu Liu


Ph.D. student
Member of the Wireless Networking and Communications Group
Department of Electrical and Computer Engineering
The University of Texas at Austin

About Me

I am a Ph.D. student in the ECE department of The University of Texas at Austin, where I am fortunate to be advised by Prof. Constantine Caramanis. My research interests are in theoretical machine learning, optimization, and high-dimensional statistics.

Before moving to Austin, I received my B.E. degree from the Department of Electronic Engineering at Tsinghua University, Beijing, China, in 2016.


Publications and Preprints


High Dimensional Robust Sparse Regression
Liu Liu, Yanyao Shen, Tianyang Li, Constantine Caramanis.
arXiv preprint arXiv:1805.11643, 2018. [arXiv Link] [arXiv PDF] [Slides]

Abstract: We provide a novel algorithm, to the best of our knowledge the first, for high dimensional sparse regression with corruptions in explanatory and/or response variables. Our algorithm recovers the true sparse parameters in the presence of a constant fraction of arbitrary corruptions. Our main contribution is a robust variant of Iterative Hard Thresholding. Using this, we provide accurate estimators with sub-linear sample complexity. Our algorithm consists of a novel randomized outlier removal technique for robust sparse mean estimation that may be of interest in its own right: it is orderwise more computationally efficient than existing algorithms, and succeeds with high probability, thus making it suitable for general use in iterative algorithms. We demonstrate its effectiveness on large-scale sparse regression problems with arbitrary corruptions.
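
A minimal sketch of the iterative-hard-thresholding idea behind this paper, assuming a least-squares loss. The coordinate-wise trimmed mean below is only an illustrative stand-in for the paper's randomized outlier-removal step, and all function names and parameters are hypothetical:

    import numpy as np

    def hard_threshold(v, k):
        # Keep the k largest-magnitude coordinates of v; zero out the rest.
        out = np.zeros_like(v)
        top = np.argsort(np.abs(v))[-k:]
        out[top] = v[top]
        return out

    def robust_iht(X, y, k, step=0.1, trim=0.1, n_iters=100):
        # Iterative hard thresholding with a robustly aggregated gradient.
        # The trimmed mean is an illustrative stand-in for the paper's
        # randomized outlier-removal step, not the authors' exact estimator.
        n, p = X.shape
        theta = np.zeros(p)
        cut = int(trim * n)
        for _ in range(n_iters):
            residuals = X @ theta - y               # per-sample residuals
            grads = X * residuals[:, None]          # per-sample gradients, shape (n, p)
            g_sorted = np.sort(grads, axis=0)       # sort each coordinate over samples
            g_robust = g_sorted[cut:n - cut].mean(axis=0)  # trimmed mean per coordinate
            theta = hard_threshold(theta - step * g_robust, k)
        return theta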


Statistical inference using SGD
Tianyang Li, Liu Liu, Anastasios Kyrillidis, Constantine Caramanis.
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018. [arXiv Link] [arXiv PDF]

Abstract: We present a novel method for frequentist statistical inference in M-estimation problems, based on stochastic gradient descent (SGD) with a fixed step size: we demonstrate that the average of such SGD sequences can be used for statistical inference, after proper scaling. An intuitive analysis using the Ornstein-Uhlenbeck process suggests that such averages are asymptotically normal. From a practical perspective, our SGD-based inference procedure is a first-order method, and is well-suited for large-scale problems. To show its merits, we apply it to both synthetic and real datasets, and demonstrate that its accuracy is comparable to that of classical statistical methods, while requiring potentially far less computation.
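
A toy sketch of fixed-step-size SGD averaging, assuming a least-squares loss. The across-run spread used for the confidence intervals below is a simple stand-in for the paper's scaling argument, and every name and constant is an illustrative choice rather than the paper's exact procedure:

    import numpy as np

    def sgd_average(X, y, step=0.05, n_steps=2000, burn_in=500, seed=0):
        # One fixed-step-size SGD run on least squares; return the average of
        # the iterates after a burn-in (burn-in length is an illustrative choice).
        rng = np.random.default_rng(seed)
        n, p = X.shape
        theta = np.zeros(p)
        running_sum = np.zeros(p)
        for t in range(n_steps):
            i = rng.integers(n)
            grad = (X[i] @ theta - y[i]) * X[i]   # stochastic gradient at one sample
            theta -= step * grad
            if t >= burn_in:
                running_sum += theta
        return running_sum / (n_steps - burn_in)

    def sgd_inference(X, y, n_runs=20):
        # Replicate averaged-SGD runs; the spread of the per-run averages gives
        # a simple plug-in 95% confidence interval (a stand-in for the paper's
        # covariance estimate, not its exact procedure).
        avgs = np.stack([sgd_average(X, y, seed=r) for r in range(n_runs)])
        center = avgs.mean(axis=0)
        se = avgs.std(axis=0, ddof=1) / np.sqrt(n_runs)
        return center, center - 1.96 * se, center + 1.96 * se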


Approximate Newton-based statistical inference using only stochastic gradients
Tianyang Li, Anastasios Kyrillidis, Liu Liu, Constantine Caramanis.
arXiv preprint arXiv:1805.08920, 2018. [arXiv Link] [arXiv PDF]

Abstract: We present a novel inference framework for convex empirical risk minimization, using approximate stochastic Newton steps. The proposed algorithm is based on the notion of finite differences and allows the approximation of a Hessian-vector product from first-order information. In theory, our method efficiently computes the statistical error covariance in M-estimation, both for unregularized convex learning problems and high-dimensional LASSO regression, without using exact second-order information or resampling the entire data set. In practice, we demonstrate the effectiveness of our framework on large-scale machine learning problems that go even beyond convexity: as a highlight, our work can be used to detect certain adversarial attacks on neural networks.
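
A minimal sketch of the finite-difference Hessian-vector product that underlies this framework, assuming a least-squares loss for concreteness (the paper treats general convex empirical risk minimization); function names and the step size delta are hypothetical:

    import numpy as np

    def minibatch_grad(theta, X, y, idx):
        # Mini-batch least-squares gradient (illustrative loss only).
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ theta - yb) / len(idx)

    def hess_vec_product(theta, v, X, y, idx, delta=1e-4):
        # Approximate the Hessian-vector product H v using only gradients:
        #   H v ~= (grad(theta + delta * v) - grad(theta)) / delta
        g_plus = minibatch_grad(theta + delta * v, X, y, idx)
        g_base = minibatch_grad(theta, X, y, idx)
        return (g_plus - g_base) / delta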



Contact