
Short bio [CV]
Jinchi Lv is McAlister Associate Professor in Business Administration in the Data Sciences and Operations Department of the Marshall School of Business at the University of Southern California, and an Associate Fellow of the USC Dornsife Institute for New Economic Thinking (INET). He received his Ph.D. in Mathematics from Princeton University in 2007. His research interests include high-dimensional statistics, big data problems, statistical machine learning, neuroscience and business applications, and financial econometrics.
His papers have been published in journals in statistics, economics, information theory, biology, and computer science, and one of them was published as a Discussion Paper in the Journal of the Royal Statistical Society Series B (2008). He serves as an associate editor of the Annals of Statistics (2013–present) and Statistica Sinica (2008–present). He is the recipient of the Royal Statistical Society Guy Medal in Bronze (2015), the NSF Faculty Early Career Development (CAREER) Award (2010), the USC Marshall Dean's Award for Research Excellence (2009), and a Zumberge Individual Award from USC's James H. Zumberge Faculty Research and Innovation Fund (2008).


Representative Publications
 Candès, E. J., Fan, Y., Janson, L. and Lv, J. (2016). Panning for gold: Model-free knockoffs for high-dimensional controlled variable selection. Manuscript. [PDF]
[Finding the key causal factors in large-scale applications goes well beyond the task of prediction. Quantifying the variability, reliability, and reproducibility of a set of discovered factors is central to enabling valid and credible scientific discoveries and investigations. How can we design a variable selection procedure for high-dimensional nonlinear models with statistical guarantees that the fraction of false discoveries can be controlled? This paper provides some surprising insights into this open question.]
 Ren, Z., Kang, Y., Fan, Y. and Lv, J. (2016). Tuning-free heterogeneity pursuit in massive networks. Manuscript. [PDF]
[Heterogeneity is a major feature of large-scale data sets in the big data era, powering meaningful scientific discoveries through the understanding of important differences among subpopulations of interest. How can we uncover the heterogeneity among a large collection of networks in a tuning-free yet statistically optimal fashion? This paper provides some surprising insights into this question.]
 Fan, Y., Kong, Y., Li, D. and Lv, J. (2016). Interaction pursuit with feature screening and selection. Manuscript. [PDF]
[Understanding how features interact with each other is of paramount importance in many scientific discoveries and contemporary applications. To discover important interactions among features in high dimensions, it has been a convention to resort to structural constraints such as the heredity assumption. Yet some key causal factors may become active only when acting jointly, not when acting alone. How can we go beyond such structural assumptions for better flexibility in real applications? This paper provides some surprising insights into this question.]
 Chen, K., Uematsu, Y., Lin, W., Fan, Y. and Lv, J. (2016). Sparse orthogonal factor regression. Manuscript. [PDF]
[How are memory states with different time constants encoded in different brain regions? How can we determine the number of key memory components? Understanding the meaningful associations among a large number of responses and predictors is key to many such contemporary scientific studies and investigations. This paper provides a unified framework that enables us to probe the large-scale response–predictor association networks through different layers of latent factors with interpretability and orthogonality.]
 Fan, Y. and Lv, J. (2016). Innovated scalable efficient estimation in ultra-large Gaussian graphical models. The Annals of Statistics 44, 2098–2126. [PDF]
[Large precision matrix estimation has long been perceived as fundamentally different from large covariance matrix estimation. What if we can innovate the data matrix and convert the former into the latter? This paper provides a surprisingly simple procedure for such a purpose that comes with extreme scalability and statistical guarantees.]
 Lv, J. and Liu, J. S. (2014). Model selection principles in misspecified models. Journal of the Royal Statistical Society Series B 76, 141–167. [PDF]
[There has been a long debate on whether the AIC or BIC principle may dominate the other for model selection (in correctly specified models). It is well-known that “all models are wrong, but some are more useful than others.” How does model misspecification impact model selection? This paper provides some surprising insights into this fundamental question and unveils that both principles can work in harmony through a Bayesian view.]
 Fan, Y. and Lv, J. (2013). Asymptotic equivalence of regularization methods in thresholded parameter space. Journal of the American Statistical Association 108, 1044–1061. [PDF]
[There has been a long debate on whether convex or nonconvex regularization methods may dominate one another. What if both classes of methods can be close to each other when viewed from a new angle? This paper unveils some surprising insights of a small-world phenomenon into this question.]
 Lv, J. (2013). Impacts of high dimensionality in finite samples. The Annals of Statistics 41, 2236–2262. [PDF]
[What are the fundamental impacts of high dimensionality in finite samples? This paper provides both a probabilistic view and a nonprobabilistic (geometric) view that are in harmony.]
 Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space (with discussion). Journal of the Royal Statistical Society Series B 70, 849–911. [PDF]
[Independence learning has long been applied widely and routinely when the scale of the data set becomes excessively large. What are the theoretical foundations for such a class of methods? This paper provides some surprising insights into this question and initiates a new line of statistical thinking on large-scale statistical learning.]
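The screening idea behind this paper can be illustrated with a minimal sketch: rank features by their marginal correlation with the response and keep only the top few, shrinking an ultrahigh-dimensional feature space to a moderate one before refined selection. The simulated data and the sizes n, p below are illustrative choices, not from the paper; only the ranking rule and the cutoff d = n/log(n) follow the method described.

```python
# Sketch of sure independence screening (SIS): marginal-correlation
# ranking on simulated data (all sizes and signals are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5000                      # n samples, p >> n features
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                        # only the first 5 features matter
y = X @ beta + rng.standard_normal(n)

# Componentwise (marginal) correlations between each feature and y.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
omega = np.abs(Xs.T @ ys) / n

# Keep the top d features; d = floor(n / log n) is one cutoff
# suggested in the paper.
d = int(n / np.log(n))
selected = np.argsort(omega)[::-1][:d]

print(len(selected), sorted(int(j) for j in selected if j < 5))
```

With a strong enough signal, the true features land near the top of the marginal ranking even though p is far larger than n, which is the point of using screening as a fast first stage.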
 Fan, J., Fan, Y. and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. Journal of Econometrics 147, 186–197. [PDF]
[The simplest framework of low-rank plus sparse structure on the covariance matrix is induced by the use of a factor model. What are the fundamental differences between large covariance matrix estimation and large precision matrix estimation in such a context? This paper provides some surprising insights into this question.]
