Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 5-21(2006)
In the last decade, one of the main issues in financial research has been risk management, as typified by Value at Risk (VaR). Furthermore, an increasing number of financial practitioners, from institutional investors to day traders, are employing intraday data in making investment decisions. In light of these developments, it is important to analyze intraday downside risk. In this report we carry out an empirical comparison of univariate and multivariate GARCH models based on intraday VaR. As an empirical study we analyze high-frequency data of the following companies listed on the Tokyo Stock Exchange (First Section) from 2 July to 28 September 2001: NISSAN, TOYOTA and HONDA, world-leading Japanese companies in the automobile industry. We also consider a portfolio consisting of three stocks weakly correlated with one another. High-frequency data, however, have different characteristics from daily or weekly data: trade intervals are unequally spaced, and the data typically exhibit intraday seasonality. Thus, such data cannot be properly analyzed with conventional time series modeling techniques. To alleviate these difficulties, we convert the irregularly spaced time series into regularly spaced ones and adjust for the intraday seasonality. We then examine the forecasting performance of one-step-ahead VaR in the following way. First, we estimate the model parameters over a certain period. Second, we simulate a VaR process from the estimated parameters and calculate the failure rate, that is, how frequently the actual return exceeds the VaR percentile. Finally, we perform a likelihood ratio test on the exceedance rate based on the simulation outcome.
Key words: High-frequency data, intraday periodicity, multivariate GARCH, intraday VaR.
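The final backtesting step above, counting exceedances and applying a likelihood ratio test to the exceedance rate, can be sketched as a Kupiec-style unconditional coverage test. The function name and the example counts below are illustrative, not taken from the paper.

```python
import math

def kupiec_lr(n_exceed, n_obs, p):
    """Likelihood-ratio statistic for unconditional coverage of a VaR level p.

    Under H0 (true exceedance probability equals p) the statistic is
    asymptotically chi-squared with 1 degree of freedom.
    """
    x, n = n_exceed, n_obs
    phat = x / n  # observed failure rate

    def ll(q):  # binomial log-likelihood of x exceedances in n trials
        return x * math.log(q) + (n - x) * math.log(1 - q)

    ll_hat = ll(phat) if 0 < x < n else 0.0  # at the boundary the MLE term is 0
    return -2.0 * (ll(p) - ll_hat)

# Illustration: 260 one-step-ahead forecasts at the 5% level, 18 exceedances.
lr = kupiec_lr(18, 260, 0.05)
reject = lr > 3.84  # 95% critical value of chi-squared(1)
```

If the observed failure rate equals the nominal level exactly, the statistic is zero; large deviations in either direction reject the model's VaR coverage.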
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 23-38(2006)
As the environment changes with financial deregulation and the new BIS capital-adequacy requirements, lenders have realized the importance of credit risk management. Default probability estimation has a long history in the field of financial risk measurement, and estimation methods can be roughly divided into statistical models and stochastic process models. Recently, a logit model with time-varying financial predictors was introduced to achieve accurate estimation of default probability. However, a logit model estimates the default probability only at a single future point in time. When a creditor evaluates the present value of a debenture subject to default risk, the term structure of the default probability is necessary. Moreover, from the viewpoint of asset and liability management, consideration of the term structure may be a critical factor. The purposes of this article are to introduce a hazard model with time-varying financial predictors, and to propose a new method for estimating the term structure of default probability. The effectiveness of the proposed methodology is shown through analysis of default data on Japanese companies.
Key words: Bayes approach, functional data analysis, hazard model, time varying covariate.
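The step from period hazards to a term structure of default probability can be illustrated with the standard survival identity PD(t) = 1 - prod_{s<=t}(1 - h_s). This is a minimal sketch assuming discrete-period hazards; it is not the paper's Bayesian estimator with time-varying covariates.

```python
def default_term_structure(hazards):
    """Cumulative default probabilities from per-period hazard rates.

    hazards[t] is the conditional default probability in period t given
    survival to its start; returns PD(t) = 1 - prod_{s<=t}(1 - hazards[s]).
    """
    pds, surv = [], 1.0
    for h in hazards:
        surv *= 1.0 - h      # survival probability up to this period
        pds.append(1.0 - surv)
    return pds

# Example: hazards of 10%, 10%, 20% over three periods.
pds = default_term_structure([0.1, 0.1, 0.2])
```

With these hypothetical hazards the cumulative default probabilities are 0.10, 0.19 and 0.352, which is the kind of term structure a creditor needs to discount a defaultable debenture.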
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 39-55(2006)
The recent increase in bankruptcies among exchange-listed companies has drawn attention to the problem of credit risk. Financial institutions are casting new light on credit risk, and their pressing task is to strengthen their credit risk management capabilities. In particular, they are putting more emphasis on corporate ratings that reflect credit risk. This paper proposes a method for predicting corporate bond rating changes using our corporate bond pricing model, and reports empirical results obtained by testing the method on Japanese corporate bond rating data. Our method predicts rating changes from the deviation of an individual bond's loss from the expected loss of its rating class and industry sector, as estimated by the corporate bond pricing model, which considers all corporate bond prices simultaneously. The model thus makes full use of the information contained in the bond prices, which are correlated with each other. The empirical results provide useful evidence that our method performs well in predicting corporate bond rating changes.
Key words: Corporate bond ratings, term structure of default probability, the expected loss of both rating level and industry sector, generalized least squares.
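Generalized least squares, named in the keywords, is the estimator beta = (X' W X)^{-1} X' W y with W the inverse of the error covariance. The sketch below is a generic textbook implementation, not the paper's bond-pricing model or its covariance specification.

```python
import numpy as np

def gls(X, y, omega):
    """Generalized least squares estimate.

    X     : (n, k) design matrix
    y     : (n,) response vector
    omega : (n, n) error covariance matrix (here inverted directly;
            a Cholesky-based solve would be preferred for large n)
    """
    W = np.linalg.inv(omega)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# With omega = identity, GLS reduces to ordinary least squares.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
beta = gls(X, y, np.eye(3))
```

For correlated bond-price errors, a non-diagonal omega downweights redundant observations, which is the reason GLS rather than OLS is the natural fit in this setting.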
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 57-78(2006)
Weather derivatives are contracts written on weather indices, which in turn are variables whose values are constructed from weather data. This paper focuses on the pricing of temperature-based weather derivatives and demonstrates their hedging effect on energy businesses. First, we categorize and review several pricing models and develop another pricing method based on trend prediction. It is shown that the futures price on the monthly average temperature can be derived by fitting a generalized additive model with a nonparametric trend and residuals. We also show that the same idea can be applied to derivative contracts whose premiums are paid in advance when the contracts are concluded. We then analyze the hedging effect of weather derivatives on energy businesses. A historical simulation shows that the weather future is highly effective for hedging electricity revenue when the revenue is proportional to electricity sales in summer. Moreover, we demonstrate an optimal revenue structure with respect to electricity sales when put options are used. Finally, we perform a similar analysis of gas sales data using weather derivatives.
Key words: Weather derivatives, generalized additive models, hedge effect, risk management, minimum variance hedge.
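The minimum variance hedge named in the keywords reduces, in its simplest form, to the ratio h* = Cov(revenue, payoff) / Var(payoff): holding h* units of the weather future minimizes the variance of the hedged revenue. The numbers below are hypothetical; this toy example only illustrates the variance reduction, not the paper's historical simulation.

```python
def mv_hedge_ratio(x, y):
    """Minimum-variance hedge ratio h* = Cov(x, y) / Var(y).

    h* is the position in the hedging instrument y that minimizes the
    sample variance of x - h*y (unbiased sample moments).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    var = sum((b - my) ** 2 for b in y) / (n - 1)
    return cov / var

# Hypothetical summer revenues and temperature-future payoffs per year:
rev = [100.0, 92.0, 110.0, 98.0, 105.0]
pay = [4.0, -3.0, 9.0, 1.0, 6.0]
h = mv_hedge_ratio(rev, pay)
hedged = [r - h * p for r, p in zip(rev, pay)]
```

Because the payoffs track the revenue shortfalls, the hedged series has a much smaller variance than the raw revenue series.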
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 79-103(2006)
We estimate a CPR (Conditional Prepayment Rate) model of RMBS (Residential Mortgage-Backed Securities) from real data as minimum AIC estimates, following Richard and Roll (1989). Theoretical prices and risk indices of RMBS, such as duration, convexity and WAL, are calculated by Monte Carlo simulation, and the parameter sensitivities of the prices and risk indices are then estimated. We show that the parameter sensitivity varies with changes in the market interest rate, a result that is important in the risk analysis of RMBS. Theoretical prices and risk indices of MBS depend heavily on the CPR model. Generally, no model is free of tuning: as time passes and new information is added, the optimal model changes. If the model parameters are unstable, the risk indices of MBS estimated with the model may be unstable as well. It is therefore useful to know the parameter sensitivity of the risk indices beforehand. This makes it possible to know how much the risk indices will change when parameter changes are anticipated, and grasping the parameter sensitivity in advance also allows stricter risk management. This is especially relevant for MBS, and adequate use of parameter sensitivity supports the sound growth of the MBS market.
Key words: MBS, CPR model, duration, convexity, WAL, parameter sensitivity.
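How a prepayment assumption feeds into a risk index such as WAL can be sketched with a toy constant-CPR pool amortization (CPR converted to a single monthly mortality, SMM = 1 - (1 - CPR)^(1/12)). The estimated CPR model of the paper is interest-rate dependent and far richer than this; the function names and inputs below are illustrative.

```python
def pool_cashflows(balance, annual_rate, term_months, cpr):
    """Monthly principal flows of a level-pay mortgage pool under a
    constant CPR assumption (annual_rate > 0 assumed)."""
    smm = 1.0 - (1.0 - cpr) ** (1.0 / 12.0)  # single monthly mortality
    r = annual_rate / 12.0
    flows = []
    for m in range(term_months, 0, -1):      # m = months remaining
        payment = balance * r / (1.0 - (1.0 + r) ** (-m))
        sched = payment - balance * r        # scheduled principal
        prepay = (balance - sched) * smm     # prepaid principal
        flows.append(sched + prepay)
        balance -= sched + prepay
    return flows

def wal(flows):
    """Weighted average life in years from monthly principal flows."""
    total = sum(flows)
    return sum((i + 1) / 12.0 * f for i, f in enumerate(flows)) / total

base = wal(pool_cashflows(100.0, 0.03, 360, 0.00))
fast = wal(pool_cashflows(100.0, 0.03, 360, 0.10))
```

Re-running `wal` under perturbed CPR parameters is exactly the kind of parameter-sensitivity calculation the abstract describes, here in miniature.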
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 105-122(2006)
Risk management has traditionally been performed separately for each field of business, using methods specific to that field. Moreover, the performance of risk management depends very much on the people involved in the program. To overcome these drawbacks, we introduce a holistic approach, the Hierarchical Holographic Modeling (HHM) method, into the risk assessment process. This is a widely recognized method for identifying risk scenarios in complex large-scale systems. In this paper, the HHM method is applied to a practical IT-system security management project, and a systematic procedure is proposed for the risk assessment process. As a result, HHM structural models are constructed and a number of risk scenarios in the IT-system security problem are identified. Analytic aspects of the risk scenarios are also studied using a risk ranking and filtering (RRF) method. Moreover, several issues in the application of the HHM method are discussed, and a series of measures are proposed that should be useful for effective use of the HHM method in the risk assessment process.
Key words: Risk assessment, Hierarchical Holographic Modeling, IT-system security.
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 123-146(2006)
For environmental management, it is essential to identify the sources of contamination so that countermeasures can be taken. Several mathematical methods, called receptor models, have been proposed for identifying sources of environmental contamination. However, no definitive method has been established, because the unknown quantities become unidentifiable once the existence of unidentified sources, i.e. sources whose compositions are unknown, is assumed. This identification problem is difficult to solve in general, but the Bayesian method sometimes yields practical estimates of unknown quantities that are otherwise unidentifiable. This article explains in detail a Bayesian method for estimating the contribution rate of unidentified sources to ambient concentrations of contaminants. Results of applying the method to actual environmental contamination problems are also shown.
Key words: Bayes, chemical mass balance, dioxins, environmental management, receptor model.
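The receptor-model setting can be illustrated by the chemical mass balance in its simplest, fully identified form: ambient concentrations are a nonnegative mixture of known source profiles, solvable by nonnegative least squares. This sketch deliberately omits the unidentified-source problem that the paper's Bayesian method addresses.

```python
import numpy as np
from scipy.optimize import nnls

def cmb_contributions(profiles, ambient):
    """Chemical mass balance with fully known sources.

    profiles : (n_species, n_sources) matrix; each column is a known
               source composition profile
    ambient  : (n_species,) observed ambient concentrations

    Returns nonnegative source contributions s minimizing
    ||profiles @ s - ambient||.
    """
    s, _residual = nnls(profiles, ambient)
    return s

# Two hypothetical sources measured on three chemical species:
profiles = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])
ambient = profiles @ np.array([2.0, 3.0])  # synthetic observation
s = cmb_contributions(profiles, ambient)
```

When a source with an unknown profile also contributes to `ambient`, this least-squares fit becomes biased, which is precisely the identifiability problem motivating the Bayesian treatment.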
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 147-175(2006)
This paper studies recursion formulae for the number of claims and the total amount of claims (aggregate claims) in a collective risk model, as an insurance application. A brief description of recursive formulae for counting distributions is given. The Panjer (or Katz) family contains the Poisson, binomial and negative binomial distributions as its non-trivial members. The Sundt-Jewell family extends the Panjer family and includes the zero-modified Poisson, negative binomial, binomial and logarithmic distributions. The Schröter family is wider still and includes the Hermite distribution and the Charlier series distribution. The non-central negative binomial distribution is not included in the Schröter family, but is a member of the Sundt (two-step recursion) family. Many distributions remain outside these families. The family proposed by Kitano, Shimizu and Ong is still more general: it includes the generalized negative binomial distribution of Gupta and Ong, which extends Kempton's full beta model, and Ong's model, in which an inverted gamma (Pearson Type V) distribution replaces the gamma distribution as the mixing distribution in Kempton's model. The generalized Charlier series distribution of Kitano et al. also belongs to this family. Properties of these distributions, such as moment generating functions and relations to other distributions, are discussed in the paper. Recursive calculation of the probability function of the total amount of claims is also studied for the case where the number of claims follows the family of Kitano et al. The result generalizes the recursive calculations in the Panjer, Sundt-Jewell, Schröter and Sundt (two-step recursion) families.
Key words: Collective risk model, generalized Charlier series distribution, Panjer family, Schröter family, Sundt family, Sundt-Jewell family.
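The prototype of the recursions discussed here is the Panjer recursion. For a Poisson claim count (the case a = 0, b = lambda in the Panjer relation p_k = (a + b/k) p_{k-1}) and an arithmetic severity distribution with no mass at zero, it reads g_s = (lambda/s) * sum_j j f_j g_{s-j}. A minimal sketch under those assumptions:

```python
import math

def panjer_poisson(lam, f, s_max):
    """Panjer recursion for a compound Poisson total-claims distribution.

    lam   : Poisson mean of the claim count
    f     : severity pmf on {1, 2, ...} with f[0] = 0 (f[j] = P(claim = j))
    s_max : largest total claim amount to tabulate

    Returns g with g[s] = P(total claims = s).
    """
    g = [0.0] * (s_max + 1)
    g[0] = math.exp(-lam)  # P(no claims), since severities are >= 1
    for s in range(1, s_max + 1):
        g[s] = (lam / s) * sum(j * f[j] * g[s - j]
                               for j in range(1, min(s, len(f) - 1) + 1))
    return g

# Sanity check: unit severities make the total a Poisson(lam) variable.
g = panjer_poisson(1.0, [0.0, 1.0], 10)
```

The families beyond Panjer discussed in the paper (Sundt-Jewell, Schröter, two-step Sundt, Kitano et al.) replace the single linear recursion above with longer or multi-step recursions of the same computational flavor.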
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 177-190(2006)
In the conjugate analysis of the von Mises distribution, it is known that the posterior mode is the optimal estimator, that is, the Bayes estimator under the Kullback-Leibler loss function. This paper derives an empirical Bayes estimator by estimating the dispersion hyper-parameter of the von Mises prior distribution under the assumption that the location hyper-parameter is known. The estimation of the dispersion hyper-parameter is based on the method of moments for the marginal density. We compare the risk of the empirical Bayes estimator with that of the maximum likelihood estimator (MLE) by numerical simulation. In practical situations, we find that the empirical Bayes estimator is far superior to the MLE in high-dimensional cases.
Key words: Conjugate prior, posterior mode, Pythagorean relationship.
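Moment estimation for a von Mises concentration parameter can be illustrated by solving the standard moment equation A(kappa) = I1(kappa)/I0(kappa) = Rbar, where Rbar is the mean resultant length of the observed angles. This is a generic sketch of the same type of moment equation, not the paper's estimator for the prior's dispersion hyper-parameter from the marginal density.

```python
import numpy as np
from scipy.special import i0e, i1e   # exponentially scaled Bessel I0, I1
from scipy.optimize import brentq

def vonmises_kappa_moment(theta, kappa_max=500.0):
    """Moment estimate of the von Mises concentration kappa.

    Solves I1(kappa)/I0(kappa) = Rbar by root finding; the scaled
    Bessel functions i0e/i1e avoid overflow, since their exponential
    factors cancel in the ratio.
    """
    c, s = np.cos(theta).mean(), np.sin(theta).mean()
    rbar = float(np.hypot(c, s))     # mean resultant length in [0, 1]
    if rbar <= 0.0:
        return 0.0                   # uniform-looking data: no concentration
    return brentq(lambda k: i1e(k) / i0e(k) - rbar, 1e-8, kappa_max)
```

Because A(kappa) is strictly increasing from 0 to 1, the root is unique whenever 0 < Rbar < A(kappa_max), so simple bracketing suffices.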
Proceedings of the Institute of Statistical Mathematics Vol.54, No.1, 191-206(2006)
The advent of the “information age” has led to a great diversity of individual views and has created an environment in which individuals' beliefs can change rapidly. Given these rapid changes, frequent surveys are needed in every area of the social sciences. However, survey response rates have gradually decreased owing to privacy concerns and lifestyle changes. There are therefore increasing calls for new survey methods that can cost-effectively produce representative samples. Web-based surveys have been devised in various areas, but the conventional Web-survey method of purposive sampling has drawn criticism for its lack of representativeness. In this study, we applied a propensity-score adjustment method to Web-based surveys to predict the results of Japanese general social surveys (nation-wide random-sample surveys). We propose a new method for selecting covariates and show that propensity-score adjustment with this method is more effective than adjustment using conventional demographic variables as covariates for estimating propensity scores.
Key words: Propensity score, sociological surveys, nonrandom sampling, Web survey, covariate adjustment.
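Adjusting a non-random web sample toward a reference random sample can be sketched with propensity-score inverse-odds weighting: fit a classifier for "web sample vs. reference sample" membership, then weight each web respondent by (1 - e)/e. Covariate selection, the paper's actual contribution, is assumed already done; all names below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_weights(X_web, X_ref):
    """Inverse-odds weights pulling the web sample toward the reference.

    X_web : covariates of web-survey respondents (n_web, k)
    X_ref : covariates of a reference random sample (n_ref, k)

    Fits P(web | x) by logistic regression on the pooled data and returns
    weights (1 - e)/e for the web respondents, normalized to mean 1.
    """
    X = np.vstack([X_web, X_ref])
    z = np.r_[np.ones(len(X_web)), np.zeros(len(X_ref))]  # 1 = web sample
    e = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X_web)[:, 1]
    w = (1.0 - e) / e
    return w / w.mean()

# A survey outcome y observed only on the web would then be estimated as
# np.average(y_web, weights=propensity_weights(X_web, X_ref)).
```

With a covariate whose distribution differs between the two samples, the weighted web mean moves toward the reference-sample mean, which is the sense in which the adjustment restores representativeness.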