This talk was part of The Multifaceted Complexity of Machine Learning.
Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent
Tselil Schramm, Stanford University
Thursday, April 15, 2021
Abstract: In many high-dimensional statistics problems, we observe information-computation tradeoffs: given access to more data, statistical estimation and inference tasks require fewer computational resources. Though this phenomenon is ubiquitous, we lack rigorous evidence that it is inherent. At present, to prove that a statistical estimation task is computationally intractable, researchers must prove lower bounds against each type of algorithm, one by one, resulting in a “proliferation of lower bounds”. We scientists dream of a more general theory that explains computational intractability in an algorithm-independent way. In this talk, I will make one small step towards realizing this dream. I will demonstrate general conditions under which two popular frameworks yield the same information-computation tradeoffs for high-dimensional hypothesis testing: the first being statistical queries in the “statistical dimension” framework, and the second being hypothesis testing with low-degree hypothesis tests. Our equivalence theorems capture numerous well-studied high-dimensional learning problems: sparse PCA, tensor PCA, planted clique, and more.
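To give a flavor of the two frameworks the abstract mentions, here is a toy sketch (not from the talk itself) of the simplest possible setting: distinguishing i.i.d. standard Gaussians from Gaussians with a small planted mean shift. The distinguishing statistic is a degree-1 polynomial in the data, and the same statistic can equally be read as a single statistical query answered to tolerance roughly 1/√n. The problem, the signal strength `lam`, and the threshold are all illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000   # dimension / number of samples (illustrative)
lam = 5.0    # signal strength (illustrative)

def sample(planted: bool) -> np.ndarray:
    """n i.i.d. coordinates; under the planted distribution each mean is lam/sqrt(n)."""
    x = rng.standard_normal(n)
    if planted:
        x += lam / np.sqrt(n)
    return x

def degree_one_test(x: np.ndarray) -> float:
    # The degree-1 polynomial sum_i x_i / sqrt(n) has mean 0 under the null,
    # mean lam under the planted distribution, and variance 1 under both --
    # so it distinguishes the two whenever lam is a large constant.
    # Viewed as a statistical query, it is the query phi(x) = x, answered
    # to additive tolerance on the order of 1/sqrt(n).
    return x.sum() / np.sqrt(n)

null_stats = np.array([degree_one_test(sample(False)) for _ in range(200)])
planted_stats = np.array([degree_one_test(sample(True)) for _ in range(200)])
threshold = lam / 2  # midpoint between the two means

print("false-positive rate:", np.mean(null_stats > threshold))
print("detection rate:     ", np.mean(planted_stats > threshold))
```

The equivalence results of the talk concern far richer problems (sparse PCA, tensor PCA, planted clique), but the pattern is the same: a low-degree polynomial of the data on one side, a tolerance-limited query on the other, computing essentially the same quantity.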