Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 5-24(2004)

Database Available for Research on Natural Disaster Reduction

Takeo Kinosita
(Suimon Kankyo)

In order to estimate extreme situations when planning for natural disaster reduction, the quality of the original data must be absolutely assured. This study was carried out to determine what must be considered to improve the quality of data.

Data are classified into two types: numerical and descriptive. Records span at most 130 years for the former and about 1300 years for the latter. Compilations of descriptive data are introduced. Solid legal procedures are required to sustain the long-term observation of numerical data needed for extreme-value statistics, and the relevant laws are cited in this report to explain the legal observation systems in Japan. The Japan Meteorological Agency and the River Bureau of the Ministry of Land, Infrastructure and Transport are the two major organizations in Japan concerned with observing hazardous events for disaster reduction; their activities and main databases are explained. Field work to acquire data is basically tedious and elaborate, and it is very hard to observe hazardous events near the extreme limits of observation, so some mistakes enter the databases under these difficult conditions. The author is carrying out field surveys throughout Japan to improve the observation system and eliminate such mistakes, and has already developed computer software to remove erroneous data from the databases. These efforts, however, come only from the data provider's side, and more comments are invited from the users' side. The author concludes that the most essential issue is a close linkage between users and data providers to promote natural disaster reduction.

Key words: Natural disaster, data quality assurance, observation, precipitation, water level, discharge.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 25-43(2004)

Recent Topics on the Extreme Theory Based on Weakly Dependent Data

Ken-ichi Yoshihara
(Soka University)

From both the theoretical and the applied viewpoints, extremal theory contains many very important problems in various fields. The theory has therefore been studied by many authors and is now discussed extensively. Almost all results are based on independent samples, but when applying them to real problems the assumption of independence is too restrictive, and in some cases the results obtained cannot be used.

In this paper, we survey some recent results on extremal theory based on weakly dependent data and examine whether the results under independence remain true, whether they can still be used after modification, and whether results of a new type can be obtained.
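The extremal index appearing in the key words measures how extremes cluster in a dependent stationary sequence. As a minimal illustration (not taken from the paper), the simple blocks estimator divides the series into blocks and compares the number of blocks containing an exceedance with the total number of exceedances:

```python
def blocks_extremal_index(x, u, block_len):
    """Simple blocks estimator of the extremal index theta: the
    number of blocks containing at least one exceedance of u,
    divided by the total number of exceedances of u."""
    n_exc = sum(1 for v in x if v > u)
    if n_exc == 0:
        return None
    blocks = [x[i:i + block_len] for i in range(0, len(x), block_len)]
    blocks_exc = sum(1 for b in blocks if any(v > u for v in b))
    return blocks_exc / n_exc

# Three exceedances clustered in one block -> theta estimate 1/3.
theta = blocks_extremal_index([0, 0, 5, 5, 5, 0, 0, 0, 0, 0], 1.0, 5)
```

For a sequence without clustering the estimate is near 1; clustering of exceedances, the hallmark of weak dependence in extremes, pushes it toward 0.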

Key words: Weak dependence, extremal index, Hill estimator, stationary sequence.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 45-62(2004)

Trimmed Sums

Makoto Maejima
(Keio University)

Let $\{X_j\}$ be i.i.d. random variables. Let $X_n^{(j)}$ be the $j$-th largest in absolute value among $X_1, \ldots, X_n$, and $X_{nj}$ the $j$-th smallest among $X_1, \ldots, X_n$. Thus $|X_n^{(1)}| > |X_n^{(2)}| > \cdots > |X_n^{(n)}|$ and $X_{n1} < X_{n2} < \cdots < X_{nn}$; $\{X_{nj}\}$ are called the order statistics. Let $S_n = \sum_{j=1}^{n} X_j$. For $r_n, p_n \in \mathbb{N}$, let ${}^{(r_n)}S_n = S_n - \sum_{j=1}^{r_n} X_n^{(j)}$ and ${}^{(r_n,p_n)}\widetilde{S}_n = S_n - \sum_{j=1}^{r_n} X_{nj} - \sum_{j=n-p_n+1}^{n} X_{nj}$. These are called trimmed sums, and this procedure of excluding extreme terms from the original sum $S_n$ is called trimming. The question is how trimming affects the asymptotic behavior of the trimmed sums.

There are three types of trimming, depending on the behavior of $r_n$ (and $p_n$) as a function of $n$. If $r_n = r$ (independent of $n$, or more generally bounded), it is called light trimming; if $r_n \to \infty$ but $r_n/n \to 0$, it is called moderate (or intermediate) trimming; and if $r_n/n \to c \in (0,1)$, it is called heavy trimming. As to the limiting behavior of the trimmed sums, we can consider both almost sure convergence, as in the law of large numbers, and weak convergence, as in the central limit theorem. In this article, we survey several important results in this area.
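The two trimming operations above can be sketched in a few lines of Python (an illustrative sketch, not code from the paper): modulus trimming removes the terms largest in absolute value, while natural-order trimming removes from both ends of the ordered sample.

```python
import math
import random

def modulus_trimmed_sum(x, r):
    """(r_n)S_n: drop the r terms largest in absolute value."""
    kept = sorted(x, key=abs)[:len(x) - r]
    return sum(kept)

def natural_trimmed_sum(x, r, p):
    """(r_n, p_n)-trimmed sum: drop the r smallest and the p
    largest order statistics."""
    xs = sorted(x)
    return sum(xs[r:len(xs) - p])

# A heavy-tailed (Cauchy) sample, where trimming matters most:
rng = random.Random(0)
sample = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(1000)]
light = modulus_trimmed_sum(sample, 2)       # r_n bounded: light trimming
moderate = modulus_trimmed_sum(sample, 30)   # r_n -> inf, r_n/n -> 0: moderate
```

For Cauchy data the untrimmed sum has no law of large numbers, while already light modulus trimming changes the limiting behavior, which is exactly the kind of effect the survey catalogues.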

Key words: Trimming, modulus trimming, natural-order trimming, light trimming, moderate trimming, heavy trimming.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 63-82(2004)

Information-approximations to the Joint Distributions of Fluctuating Numbers of Quasi-extreme Order Statistics

Tadashi Matsunawa
(The Institute of Statistical Mathematics;
Department of Statistical Science, The Graduate University for Advanced Studies)
Yoshinobu Nakamura
(Department of Statistical Science, The Graduate University for Advanced Studies)

Asymptotic distributions of the n(N)-lower extremes and m(N)-upper extremes of a random sample of size N drawn from a univariate continuous distribution are investigated from the common standpoint of information and entropy used in various scientific fields. Two kinds of approximation modes based on the Kullback-Leibler mean information are considered: a directed approximation in the sense of full measured information-negentropy, and an approximation in the sense of modified information. The related approximation errors are evaluated precisely by calculating the K-L information numbers or their modified counterparts. As a main result, it is shown that n/N → 0 (N → ∞) is a necessary and sufficient condition for the n(N)-lower extremes to be asymptotically equivalent to an n(N)-dimensional extreme random vector in the sense of full measured information-negentropy. Weaker results for the asymptotic distributions of the n(N)-lower extremes and m(N)-upper extremes are given in the sense of modified information. As an application, the basic limit theorem of N.V. Smirnov for standardized near-extreme order statistics with fixed rank is extended to a strong asymptotic theorem for the standardized n(N)-lower extremes in the sense of modified information.

Key words: Quasi-extreme order statistics, information-negentropy, modified information, approximate main domain, qualitative error evaluation, basic standardized extreme value theorem.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 83-92(2004)

Composition of Estimators Based on Characterization by Maximum Entropy Method of a Generalized Pareto Distribution

Toshihiko Kawamura and Kōsei Iwase
(Artificial Complex Systems Engineering, Hiroshima University)

Since the generalized Pareto (GP) distribution was introduced by Pickands, it has been studied by many authors in various fields. In general, however, estimating the parameters of the GP distribution, with scale parameter σ > 0 and shape parameter −∞ < k < ∞, is not easy. Castillo and Hadi noted that the maximum likelihood estimates do not exist when k < −1, and that for k > 1/2 the second and higher moments do not exist, so neither the method-of-moments estimates nor the probability-weighted moment estimates exist. This paper proposes, for any real number k, a method for estimating the parameters based on the maximum entropy method. The proposed method is then applied to reference data, and numerical computations are carried out.
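For reference, in the shape convention matching the abstract the GP cdf is F(x) = 1 − (1 + kx/σ)^(−1/k) for k ≠ 0 (exponential for k = 0), under which the r-th moment exists only when k < 1/r, so the variance fails to exist precisely when k > 1/2. The following sketch (not the authors' maximum entropy estimator) draws samples by inverse transform under that convention:

```python
import math
import random

def gpd_sample(sigma, k, rng):
    """One inverse-transform draw from the GP distribution with
    cdf F(x) = 1 - (1 + k*x/sigma)**(-1/k) for k != 0 and
    F(x) = 1 - exp(-x/sigma) for k == 0."""
    u = max(rng.random(), 1e-16)   # u plays the role of 1 - F(x)
    if k == 0.0:
        return -sigma * math.log(u)
    return sigma * (u ** (-k) - 1.0) / k

rng = random.Random(1)
draws = [gpd_sample(1.0, 0.25, rng) for _ in range(100_000)]
# For k < 1 the mean exists and equals sigma / (1 - k); here 4/3.
mean = sum(draws) / len(draws)
```

For k < 0 the support is bounded above by −σ/k, which is the other regime where parameter estimation behaves differently.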

Key words: Characterizations of the distribution, generalized Pareto distribution, maximum entropy method, point estimation.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 93-116(2004)

Estimation of Larger Quantiles Based on the r Largest Observations

Rinya Takahashi
(Kobe University of Mercantile Marine)
Masaaki Sibuya
(Takachiho University)

Assume that the largest values are observed in each of n unit areas or intervals. To estimate quantiles of small upper-tail probability, the r largest values of each of the n datasets are used. The asymptotic efficiency of the maximum likelihood estimates relative to the case r = 1 is shown in tables. The depths of small pits caused by corrosion are analyzed following the discussion in the paper.
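The r-largest method generalizes block-maximum fitting: the joint limiting density of the r largest values in each block enters the likelihood. A minimal sketch for the Gumbel case (illustrative only; the paper works with the generalized extreme value family, and its efficiency tables are not reproduced here):

```python
import math

def gumbel_r_largest_nll(mu, sigma, blocks):
    """Negative log-likelihood of the r-largest order statistics
    model with a Gumbel limit.  Each block lists its r largest
    values in decreasing order; one block contributes
    sigma**(-r) * exp(-exp(-z_r) - sum(z_i)), z_i = (x_i - mu)/sigma."""
    nll = 0.0
    for xs in blocks:
        z = [(x - mu) / sigma for x in xs]
        nll += len(xs) * math.log(sigma) + math.exp(-z[-1]) + sum(z)
    return nll

def gumbel_upper_quantile(mu, sigma, p):
    """Quantile x with upper-tail probability p: P(X > x) = p."""
    return mu - sigma * math.log(-math.log(1.0 - p))
```

Minimizing the negative log-likelihood over (mu, sigma) and plugging the estimates into the quantile function gives the tail-quantile estimate; using r > 1 values per block reduces the asymptotic variance relative to r = 1, which is the efficiency gain the paper tabulates.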

Key words: Asymptotic relative efficiency, asymptotic variance, generalized extreme value distribution, Gumbel distribution.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 117-134(2004)

Estimation of Human Longevity Distribution Based on Tabulated Statistics

Masaaki Sibuya
(Faculty of Business Management, Takachiho University)
Nobutane Hanayama
(Faculty of Art and Information, Shobi-gakuen University)

(Age, period)-specific data on the oldest-old survivors and deaths are analyzed using extreme value theory, and the limit of the longevity distribution is discussed. The data were obtained from the National Oldest-old Survivors List and the Population Movement Statistics of the Ministry of Health and Labor in Japan. To apply the theory of extreme value statistics of continuous variables to (age, period)-specific tabulated data, we propose a procedure based on a multinomial distribution model whose cell probabilities are calculated from the generalized Pareto distribution, and another procedure using a continuous pseudo-random sample generated by adding random numbers to the tabulated values. Furthermore, the piecewise constant intensity model on the Lexis diagram, widely used for such data, is also applied. The maximum likelihood estimates indicate a finite upper limit of the longevity distribution.
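The first proposed procedure can be sketched as follows (a sketch of the stated construction, not the authors' code): age cells above a threshold get multinomial probabilities computed from the GP distribution, and the tabulated death counts enter a multinomial log-likelihood.

```python
import math

def gpd_cdf(x, sigma, k):
    """GP cdf: F(x) = 1 - (1 + k*x/sigma)**(-1/k) for k != 0,
    1 - exp(-x/sigma) for k == 0; x is age above the threshold."""
    if k == 0.0:
        return 1.0 - math.exp(-x / sigma)
    t = 1.0 + k * x / sigma
    return 1.0 - t ** (-1.0 / k) if t > 0.0 else 1.0

def multinomial_loglik(counts, edges, sigma, k):
    """Log-likelihood of tabulated counts in age cells
    [edges[i], edges[i+1]); cell probabilities are GP interval
    probabilities renormalized over the tabulated range."""
    probs = [gpd_cdf(b, sigma, k) - gpd_cdf(a, sigma, k)
             for a, b in zip(edges[:-1], edges[1:])]
    total = sum(probs)
    return sum(c * math.log(p / total)
               for c, p in zip(counts, probs) if c > 0)
```

When the fitted shape is negative, the GP distribution has the finite endpoint −σ/k, which is how a finite upper limit of longevity appears in the estimates.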

Key words: Cohort analysis, maximum likelihood estimation, mean residual life function, national oldest-old survivors list, piecewise constant intensity model, population movement statistics.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 135-149(2004)

Two Indices for Tail Part Distribution of Population of Extremal Wave Heights: Tail Length and Tail Thickness

Toshikazu Kitano
(Nagoya Institute of Technology)

Both Fisher-Tippett's extreme value distributions and the Weibull distribution are tested as candidate population distributions in extreme wave analysis in the field of coastal engineering. However, local maxima, or even annual maxima, of significant wave heights are not necessarily guaranteed to follow a theoretical extreme value distribution. This paper introduces two dominant indices, tail length and tail thickness, associated with two or three return values, for the purpose of comparing different distributions. These indices are intended to correspond to the scale and shape parameters of the generalized extreme value distribution whose tail is regarded as that of the target population function at a specific return period. By using a return value together with the tail length and thickness indices, the differences between distributions can be easily examined quantitatively. A practical example of analysis using these indices is demonstrated on a hindcast wave data set. The indices are also applicable in other fields of extreme analysis, with distributions other than the extreme value distributions employed as candidate populations.
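The exact definitions of the two indices are given in the paper; as a hypothetical illustration of the idea, GEV return values can be combined so that a difference of return values behaves like a scale (length) quantity, and a ratio of such differences like a shape (thickness) quantity:

```python
import math

def gev_return_value(mu, sigma, xi, T):
    """GEV return value z_T with annual exceedance probability 1/T."""
    y = -math.log(1.0 - 1.0 / T)
    if xi == 0.0:
        return mu - sigma * math.log(y)    # Gumbel case
    return mu + sigma * (y ** (-xi) - 1.0) / xi

def tail_indices(mu, sigma, xi, periods=(10, 100, 1000)):
    """Illustrative (not the paper's exact) indices built from three
    return values: a length-type spread and a thickness-type ratio."""
    z1, z2, z3 = (gev_return_value(mu, sigma, xi, T) for T in periods)
    return z2 - z1, (z3 - z2) / (z2 - z1)
```

The thickness-type ratio exceeds 1 for heavy-tailed populations (xi > 0), is close to 1 in the Gumbel case, and falls below 1 for bounded tails (xi < 0), so it separates candidate distributions by tail shape at the return periods of interest.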

Key words: Quantile, Weibull distribution, GEV, significant wave heights.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 151-173(2004)

Multi-site Wind and Earthquake Hazard Analysis via Multivariate Extreme Value Distribution

Jun Kanda and Kazuyoshi Nishijima
(Graduate School of Frontier Sciences, University of Tokyo)

Wind loads and earthquake loads are the two major loads considered in structural design in Japan, and wind and earthquake hazard analyses provide essential information to structural engineers. It is common to estimate the intensities of wind speeds and earthquake ground motions in a probabilistic manner, but existing hazard analyses are commonly available only for a single site. For the optimum design of a group of buildings, or for disaster mitigation over an area, however, the correlation of loads between any two sites has to be considered. In this paper, such spatial correlation characteristics are discussed using the multivariate extreme value distribution.
For the wind hazard analysis, we quantitatively examined wind hazards, in terms of the dependency function between sites, in the Kyushu area, which typhoons strike more frequently than the Kanto area. For the earthquake hazard analysis, we dealt with hazards at various sites in the Kanto area. Finally, we compared the degree of spatial correlation for wind and for earthquake. All the results were found to be consistent with existing engineering observations.
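The dependency function for a pair of sites can be illustrated with the bivariate logistic extreme value model (a standard textbook family, used here only as an example, not necessarily the model adopted in the paper):

```python
def logistic_pickands(t, alpha):
    """Pickands dependence function A(t) of the bivariate logistic
    extreme value model, 0 < alpha <= 1: alpha = 1 gives
    independence, alpha -> 0 complete dependence."""
    return (t ** (1.0 / alpha) + (1.0 - t) ** (1.0 / alpha)) ** alpha

def extremal_coefficient(alpha):
    """theta = 2 * A(1/2): 2 for independent sites, 1 for
    completely dependent sites."""
    return 2.0 * logistic_pickands(0.5, alpha)
```

A fitted extremal coefficient near 1 indicates strongly correlated hazards at the two sites, and near 2 nearly independent hazards, giving a single number with which to compare the spatial correlation of wind and earthquake loads.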

Key words: Multivariate extreme value distribution, spatial correlation, dependency structure, multi-site, hazard analysis.


Proceedings of the Institute of Statistical Mathematics Vol. 52, No. 1, 175-187(2004)

Low Probability Breakdown Voltage Estimation for the New Step-up Test Method

Hideo Hirose
(Faculty of Computer Science & Systems Engineering, Kyushu Institute of Technology)

The conventional step-up test method has long been used to estimate impulse breakdown voltages in the field of electrical insulation engineering. This method yields binary discrete data, recording only whether or not the insulation broke down at each step. With recently developed fast waveform-recording measurement equipment, impulse waveforms can now be measured accurately, so continuous breakdown voltage data can be observed, resulting in more accurate breakdown voltage estimates. We call this the new step-up test method. This article discusses the optimum test design for both the conventional and the new test methods, and compares the estimation errors of the low-probability breakdown voltages in both methods.
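The conventional procedure can be mimicked by a small simulation (an illustrative sketch with an assumed normally distributed breakdown threshold; the parameter values are arbitrary):

```python
import random

def step_up_trial(mu, sd, start, step, rng):
    """One conventional step-up trial: raise the applied voltage in
    fixed steps until breakdown.  The breakdown threshold is assumed
    normally distributed; only the discrete step at which breakdown
    occurred is recorded, not the continuous threshold itself."""
    threshold = rng.gauss(mu, sd)
    v = start
    while v < threshold:
        v += step
    return v

rng = random.Random(0)
obs = [step_up_trial(50.0, 5.0, 20.0, 1.0, rng) for _ in range(2000)]
# The new method would instead record the continuous threshold value,
# discarding less information per trial.
```

The discreteness of the recorded values, which are multiples of the step size above the starting voltage, is precisely the information loss that the new step-up method removes.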

Key words: Impulse breakdown voltage, step-up method, new step-up method, step-up distance, normal distribution, Weibull distribution, Gumbel distribution.
