(Received April 30, 1991; revised February 27, 1992)
Abstract. A new estimator of a regression function is introduced by minimizing the L1-distance between an empirical function and its theoretical counterpart, plus a penalty for roughness. The L1-risk of the estimator is bounded from above for every sample size, regardless of the dependence structure of the observed random variables. In the case of independent measurement errors with a common variance, the estimator is shown to achieve the optimal L1-rate of convergence within the class of m-times differentiable functions with bounded derivatives.
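As a rough schematic only (the abstract does not specify the empirical function, the domain of integration, or the exact form of the penalty, so every symbol below is illustrative rather than the paper's own notation), a penalized minimum L1-distance criterion of the kind described above may be written as

\[
\hat g_n \in \arg\min_{g \in \mathcal{G}} \left\{ \int \bigl| \widehat H_n(t) - H_g(t) \bigr| \, dt \;+\; \lambda_n \int \bigl| g^{(m)}(t) \bigr| \, dt \right\},
\]

where \(\widehat H_n\) denotes an empirical function computed from the observations, \(H_g\) its theoretical counterpart under the regression function \(g\), and \(\lambda_n\) a smoothing parameter controlling the weight of the roughness penalty.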
Key words and phrases: Nonlinear regression, minimum distance estimation, rates of convergence.