Interpolation and Regularization for Causal Learning

We study the problem of learning causal models from observational data through the lens of interpolation and its counterpart---regularization. A large body of recent theoretical and empirical work suggests that, in highly complex model …

Revisiting "No free lunch".

Classical learning theory suggests that choosing a hypothesis class that is neither too complex nor too simple leads to optimal expected (generalization) risk. This is typically illustrated by a U-shaped curve relating the expected risk of the estimator to the complexity of the function class (Figure a). Recent work has shown that, under certain additional assumptions (such as a minimum-norm condition), extending the curve beyond the interpolating regime can yield estimators with lower expected risk.
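As a concrete illustration of the minimum-norm condition mentioned above, the sketch below (using synthetic linear data that we introduce purely for illustration; none of these variable names come from the paper) constructs the minimum-norm interpolator in an overparameterized linear regression, where the number of features exceeds the number of samples and infinitely many interpolating solutions exist.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic setup (an assumption, not the paper's experiment):
# n samples, d features with d > n, so the system X w = y is underdetermined
# and infinitely many interpolating solutions w exist.
n, d = 20, 100
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_true  # noiseless labels, so w_true itself interpolates

# Minimum-norm interpolator: among all w satisfying X w = y, the
# Moore-Penrose pseudoinverse selects the one of smallest Euclidean norm.
w_min_norm = np.linalg.pinv(X) @ y

train_residual = np.max(np.abs(X @ w_min_norm - y))
print(f"max training residual: {train_residual:.2e}")  # ~0: exact interpolation
print(f"||w_min_norm|| / ||w_true||: "
      f"{np.linalg.norm(w_min_norm) / np.linalg.norm(w_true):.2f}")
```

The point of the sketch is that "interpolating regime" does not pin down a unique estimator: the minimum-norm condition acts as an implicit regularizer that selects one particular interpolant, which is the setting in which the cited results on lower expected risk are stated.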