
Gilles BLANCHARD (Université Paris Sud) – "Is adaptive early stopping possible in statistical inverse problems?"

The Statistical Seminar: every Monday at 2:00 pm.
Date: February 10, 2020
Time: 2:00 pm – 3:15 pm
Place: Room 3001

Consider a statistical inverse problem in which several estimators $\hat{f}_1, \ldots, \hat{f}_K$ are available, ranked in order of increasing “complexity” (here identified with their variance), and the classical problem of estimator selection. For a (data-dependent) choice of the estimator index $\hat{k}$, a number of well-known methods achieve oracle adaptivity in a wide range of contexts, for instance penalization methods or Lepski’s method.
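As a rough illustration (not taken from the talk), a penalization method of this kind can be sketched in a toy Gaussian sequence model with projection (“cutoff”) estimators, where the Mallows-type penalty $2\sigma^2 k$ is one standard choice; all names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian sequence model: Y_i = f_i + sigma * eps_i, i = 1, ..., D.
D, sigma = 100, 0.1
f = 1.0 / np.arange(1, D + 1) ** 2          # true coefficients, decaying with i
Y = f + sigma * rng.standard_normal(D)

def cutoff_estimate(Y, k):
    """Projection ("cutoff") estimator: keep the first k noisy coefficients."""
    fk = np.zeros_like(Y)
    fk[:k] = Y[:k]
    return fk

# Penalized selection (Mallows-type C_p): note that ALL K estimators
# must be computed before any comparison can be made.
K = D
penalized_risks = [
    np.sum((Y - cutoff_estimate(Y, k)) ** 2) + 2 * sigma**2 * k
    for k in range(1, K + 1)
]
k_pen = 1 + int(np.argmin(penalized_risks))
```

The selected index `k_pen` balances the residual (which decreases with $k$) against the variance penalty (which grows with $k$), but only after the full sweep over $k = 1, \ldots, K$.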
However, these methods have in common that the estimators for all possible values of $k$ must be computed first, and only then compared to each other in some way to determine the final choice. Contrast this with an “early stopping” approach, in which we compute the estimators iteratively for $k = 1, 2, \ldots$ and must decide to stop at some point, without being allowed to compute the subsequent estimators. Is oracle adaptivity possible then? This question is motivated by settings where computing the estimators for larger $k$ carries a higher computational cost; furthermore, some form of early stopping is what is most often used in practice.
We propose a precise mathematical formulation of this question, in the idealized framework of a Gaussian sequence model with $D$ observed noisy coefficients. In this model, we provide upper and lower bounds on what early stopping rules can achieve with the regularized linear filter estimators commonly used for statistical inverse problems.
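To make the sequential constraint concrete, here is a minimal sketch of one classical early stopping rule, the discrepancy principle, applied to cutoff estimators in the same toy Gaussian sequence model; this is a generic textbook rule chosen for illustration, not the specific procedure analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian sequence model with D observed noisy coefficients:
# Y_i = f_i + sigma * eps_i, i = 1, ..., D.
D, sigma = 100, 0.1
f = 1.0 / np.arange(1, D + 1) ** 2
Y = f + sigma * rng.standard_normal(D)

def early_stopping_cutoff(Y, sigma):
    """Discrepancy-principle stopping for cutoff estimators: increase k
    and stop as soon as the residual ||Y - f_k||^2 drops to the level
    expected from pure noise, sigma^2 * (D - k).  Estimators for larger
    k are never computed."""
    D = len(Y)
    residual = np.sum(Y ** 2)            # residual at k = 0
    for k in range(1, D + 1):
        residual -= Y[k - 1] ** 2        # cheap sequential update
        if residual <= sigma ** 2 * (D - k):
            return k
    return D

k_stop = early_stopping_cutoff(Y, sigma)
f_hat = np.zeros(D)
f_hat[:k_stop] = Y[:k_stop]              # estimator at the stopping index
```

Each iteration costs only one coefficient update, so the total work is proportional to the (random) stopping index rather than to $K$; the question in the abstract is whether such a rule can still be adaptive in an oracle sense.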
Joint work with M. Hoffmann and M. Reiss.