**Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent**

**Speaker**: Tselil Schramm (Stanford University)
**Occasion**: The Multifaceted Complexity of Machine Learning
**Date**: April 15, 2021

**Abstract**:
In many high-dimensional statistics problems, we observe information-computation tradeoffs: given access to more data, statistical estimation and inference tasks require fewer computational resources. Though this phenomenon is ubiquitous, we lack rigorous evidence that it is inherent. At present, to prove that a statistical estimation task is computationally intractable, researchers must prove lower bounds against each type of algorithm, one by one, resulting in a “proliferation of lower bounds”. We dream of a more general theory which explains computational intractability in an algorithm-independent way.

In this talk, I will make one small step towards realizing this dream. I will demonstrate general conditions under which two popular frameworks yield the same information-computation tradeoffs for high-dimensional hypothesis testing: the first being statistical queries in the “statistical dimension” framework, and the second being hypothesis testing with low-degree hypothesis tests. Our equivalence theorems capture numerous well-studied high-dimensional learning problems: sparse PCA, tensor PCA, planted clique, and more.
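As background (a sketch of the standard setup, not taken from the abstract itself): in the low-degree framework one tests between a null distribution $\mathbb{Q}$ and a planted distribution $\mathbb{P}$ using polynomials of degree at most $D$, and the power of all such tests is captured by the norm of the low-degree likelihood ratio:

```latex
% Hypothesis testing: null \mathbb{Q} vs. planted \mathbb{P}.
% L = d\mathbb{P}/d\mathbb{Q} is the likelihood ratio, and L^{\le D}
% denotes its projection onto polynomials of degree at most D
% in L^2(\mathbb{Q}).
\mathrm{Adv}_{\le D}
  \;=\; \max_{f:\,\deg f \le D}
        \frac{\mathbb{E}_{X \sim \mathbb{P}}[f(X)]}
             {\sqrt{\mathbb{E}_{X \sim \mathbb{Q}}[f(X)^2]}}
  \;=\; \bigl\| L^{\le D} \bigr\|_{L^2(\mathbb{Q})} .
```

Heuristically, if $\mathrm{Adv}_{\le D}$ stays bounded as the dimension grows, degree-$D$ tests cannot distinguish $\mathbb{P}$ from $\mathbb{Q}$; the equivalence theorems in the talk relate this quantity to the statistical dimension that governs statistical query lower bounds.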

Based on joint work with Matthew Brennan, Guy Bresler, Samuel B. Hopkins, and Jerry Li.