

Title: Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far smaller than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime, where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime, where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime, where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
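The sample-starved regime described above can be made concrete with a minimal correlation-screening sketch. The code below is illustrative only, not the article's method: it draws synthetic data with n ≪ p, plants one genuinely correlated variable pair (columns 0 and 1, a hypothetical choice), computes the p × p sample correlation matrix, and screens for pairs whose absolute correlation exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample-starved regime: few samples n, many variables p (p >> n).
n, p = 10, 500

# Synthetic data: independent standard normal variables, with one
# correlated pair planted (columns 0 and 1) so the screen has a
# true discovery to find.
X = rng.standard_normal((n, p))
X[:, 1] = 0.9 * X[:, 0] + np.sqrt(1 - 0.9**2) * rng.standard_normal(n)

# Correlation screening: form the p x p sample correlation matrix and
# report variable pairs whose absolute correlation exceeds a threshold.
R = np.corrcoef(X, rowvar=False)      # p x p pairwise correlations
rho = 0.8                             # screening threshold (arbitrary here)
iu = np.triu_indices(p, k=1)          # upper triangle, excluding diagonal
hits = [(i, j, R[i, j]) for i, j in zip(*iu) if abs(R[i, j]) > rho]

print(f"{len(hits)} pair(s) exceed |correlation| > {rho}")
```

With n = 10 and roughly 125,000 candidate pairs, many spurious correlations exceed the threshold alongside the planted pair: exactly the false-discovery phenomenon that motivates the sample-complexity analysis of correlation screening in the purely high-dimensional regime.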
  1. Univ. of Michigan, Ann Arbor, MI (United States)
  2. Stanford Univ., CA (United States)
Publication Date:
Grant/Contract Number:
NA0002534; FA9550-13-1-0043; W911NF-11-1-0391; W911NF-12-1-0443; 2P01CA087634-06A2; DMS-0906392; DMS-CMG1025465; AGS-1003823; DMS-1106642; DMS-CAREER-1352656; DARPA-YFAN66001-111-4131
Accepted Manuscript
Journal Name:
Proceedings of the IEEE
Additional Journal Information:
Journal Volume: 104; Journal Issue: 1; Journal ID: ISSN 0018-9219
Institute of Electrical and Electronics Engineers
Research Org:
Univ. of Michigan, Ann Arbor, MI (United States)
Sponsoring Org:
USDOE National Nuclear Security Administration (NNSA); National Institutes of Health (NIH); National Science Foundation (NSF); US Army Research Office (ARO)
Country of Publication:
United States
97 MATHEMATICS AND COMPUTING; Asymptotic regimes; big data; correlation estimation; correlation mining; correlation screening; correlation selection; graphical models; large-scale inference; purely high dimensional; sample complexity; triple asymptotic framework; unifying learning theory
OSTI Identifier: