AISM 54, 459-475
© 2002 ISM

Statistical asymptotic theory of active learning

Takafumi Kanamori

Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, Ookayama 2-12-1, Meguro-ku, Tokyo 152-8552, Japan, e-mail: kanamori@is.titech.ac.jp

(Received August 26, 1998; revised January 19, 2001)

Abstract.    We study a parametric estimation problem. Our aim is to estimate or identify a conditional probability, which is called the system. We suppose that we can select appropriate inputs to the system when we gather the training data. This kind of estimation is called active learning in the context of artificial neural networks. In this paper we propose new active learning algorithms and evaluate their risk by using statistical asymptotic theory. The algorithms can be regarded as a version of experimental design with two-stage sampling. We verify the efficiency of active learning by simple computer simulations.
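As a rough sketch of this setting (the notation here is illustrative and not taken from the paper), write $q(y \mid x)$ for the true system, $p(y \mid x, \theta)$ for the parametric model, and $\hat{\theta}_n$ for the estimator obtained from $n$ training examples. A risk based on the Kullback-Leibler divergence, of the kind named in the key words, can be written as
\[
R(\hat{\theta}_n) = \mathrm{E}\!\left[ \int r(x)\, D_{\mathrm{KL}}\!\bigl( q(\cdot \mid x) \,\|\, p(\cdot \mid x, \hat{\theta}_n) \bigr)\, dx \right],
\]
where $r(x)$ is the input density at which the fitted model will be used and the expectation is taken over the training data. Under this kind of formulation, active learning amounts to choosing the distribution of the training inputs, possibly in two stages, so as to make this risk small.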

Key words and phrases:    Active learning, Kullback-Leibler divergence, risk, optimal experimental design.
