AISM 52, 630-645

## Laws of iterated logarithm and related asymptotics for estimators of
conditional density and mode

### K.L. Mehra^{1}, Y.S. Ramakrishnaiah^{2} and P. Sashikala^{3}

^{1}Department of Mathematical Sciences,
University of Alberta, Edmonton, Canada T6G 2G1

^{2}Department of Statistics, Osmania University,
Hyderabad-500007, India

^{3}Department of Statistics, B.B.C.I.T.
Kachiguda, Hyderabad-500027, India

(Received September 29, 1997; revised February 4, 1999)

Abstract.
Let $(X_i,Y_i)$ be a sequence of i.i.d. random
vectors in $R^{(2)}$ with an absolutely continuous distribution
function $H$ and let $g_x(y)$, $y\in R^{(1)}$ denote the conditional
density of $Y$ given $X=x\in\Lambda(F)$, the support of $F$,
assuming that it exists. Also let $M(x)$ be the (unique) conditional
mode of $Y$ given $X=x$ defined by $M(x)= \arg\max_y(g_x(y))$. In
this paper new classes of smoothed rank nearest neighbor (RNN)
estimators of $g_x(y)$, its derivatives and $M(x)$ are proposed, and
laws of the iterated logarithm (pointwise), uniform a.s. convergence
over $-\infty<y<\infty$ and $x\in C$, a compact subset of $\Lambda(F)$,
and asymptotic normality are established for the proposed estimators.
Our results and proofs also cover the Nadaraya-Watson (NW) case.
Using the concept of relative efficiency, it is shown that the
proposed RNN estimator of $M(x)$ is asymptotically superior to the
corresponding NW-type estimator considered earlier in the literature.

Key words and phrases:
Conditional density, conditional mode, smooth rank nearest neighbor estimators, law of iterated logarithm, uniform strong convergence.
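To make the objects in the abstract concrete, the following is a minimal sketch (not the authors' RNN construction) of the NW-type baseline it refers to: a kernel ratio estimate of the conditional density $g_x(y)$ and the conditional mode $M(x)=\arg\max_y g_x(y)$ obtained by maximizing over a grid. The Gaussian kernels, the bandwidths `h1`, `h2`, and the grid search are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def nw_conditional_density(x, y_grid, X, Y, h1, h2):
    """Nadaraya-Watson-type estimate of g_x(y) on a grid of y values:
    a ratio of kernel sums, with Gaussian kernels and bandwidths h1 (in x)
    and h2 (in y). Bandwidth choice here is purely illustrative."""
    wx = np.exp(-0.5 * ((x - X) / h1) ** 2)            # kernel weights in x
    ky = (np.exp(-0.5 * ((y_grid[:, None] - Y) / h2) ** 2)
          / (h2 * np.sqrt(2.0 * np.pi)))               # kernel in y, one row per grid point
    return ky @ wx / wx.sum()                          # estimated g_x(y) on y_grid

def conditional_mode(x, y_grid, X, Y, h1, h2):
    """Estimate M(x) = argmax_y g_x(y) by maximizing the density
    estimate over the supplied grid."""
    g = nw_conditional_density(x, y_grid, X, Y, h1, h2)
    return y_grid[np.argmax(g)]
```

For data generated as $Y = 2X + \varepsilon$ with small noise, `conditional_mode(0.5, ...)` should return a value near $1.0$, since the conditional density of $Y$ given $X=0.5$ peaks there.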
