A Prospect of Earthquake Prediction Research

 

Yosihiko Ogata

 

 

Abstract.

Earthquakes occur on complex faults, under a variety of preparatory-process scenarios and under stresses in the Earth's crust that remain uncertain; none of these can be observed directly. The deterministic earthquake prediction that the public desires is therefore difficult. To forecast earthquake occurrence while taking these elements into account comprehensively, probabilistic prediction is unavoidable. Such prediction, however, carries large uncertainty in judging whether an anomalous phenomenon is a precursor of a large earthquake, and in judging how imminent that earthquake is. The discovery of facts potentially useful for earthquake prediction is incomplete unless it is accompanied by quantitative modeling of the associated risk. This manuscript describes a prospect of earthquake predictability research aimed at realizing practical operational forecasting in the near future.

 

1. INTRODUCTION

1.1 Progress in Geophysics and Earthquake Prediction

Public expectations of earthquake predictability are excessive, while disappointment with the current state of the art is correspondingly great. Yet until half a century ago we did not even know the cause of earthquakes. Seismologists tried to grasp any available clue, and statistical seismology played a major role in earthquake research at that time, revealing regularities of occurrence and examining other phenomena statistically associated with it (Aki, 1956).

Thanks to the remarkable development of solid-earth science since the late 1960s, our knowledge of the earthquake phenomenon has increased significantly. The relevant data have also grown by leaps and bounds as geophysical studies of earthquakes have progressed. After every major earthquake, many facts about the underlying mechanisms have been elucidated.

However, even with detailed analysis and discussion, the diversity and complexity of the earthquake phenomenon have become ever more conspicuous. This is unfortunate for deterministic earthquake prediction, because such prediction would have to take exhaustive account of all the diverse and complex preparatory processes (scenarios) in order to reflect their physics faithfully.

 

1.2 CSEP project and its aim

Momentum has been growing for seismologists to develop earthquake predictability research steadily and in an organized manner, rather than seeking a magic bullet for earthquake prediction. An international cooperative study among major earthquake-prone countries, the Collaboratory for the Study of Earthquake Predictability (CSEP; Jordan, 2006), is now underway to explore the possibility of earthquake prediction. An immediate objective of the project is to encourage the development of statistical models of seismic activity and to evaluate their predictive performance in terms of probability. It also aims to develop a scientific infrastructure for evaluating the statistical significance and the "probability gain" (Aki, 1981) of various methods that predict large earthquakes from observed anomalies, such as changes in seismic activity, electromagnetic phenomena, or crustal movements. Here, the probability gain is the ratio of the predicted probability to the underlying (baseline) earthquake probability. This is an important concept, and I will return to it later.
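To illustrate with hypothetical numbers (not from any specific study): if the baseline probability of a large earthquake in a given region and week is $P_0 = 10^{-4}$ and an observed anomaly carries a probability gain of $G = 5$, the forecast probability rises only to
\[ P = G \times P_0 = 5 \times 10^{-4}, \]
which shows why even a statistically significant anomaly with a modest gain may not, on its own, support practical countermeasures.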

In fact, a number of earthquake prediction techniques based on anomalies of various types have been proposed, but their effectiveness remains a matter of constant controversy (Jordan et al., 2011). It is therefore necessary to evaluate their predictive power objectively; otherwise the field remains preoccupied with barren controversy.

First, in order to give better probability forecasts, CSEP tries to establish standard models suited to various parts of the world, revising them repeatedly. The "likelihood" is used as a reasonable measure of prediction performance (cf. Boltzmann, 1878; Akaike, 1985). If someone claims that a new prediction model incorporates information potentially more useful than that in the standard model, it should be evaluated whether the new model actually improves predictive power. Earthquake forecasting models should evolve in this manner.
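As a rough illustration of likelihood-based scoring (a minimal sketch only; the space-time-magnitude binning and the formal tests used by CSEP are more elaborate), one can compare two gridded rate forecasts on the same observed counts by their Poisson log-likelihoods. The forecast rates and observed counts below are hypothetical.

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed bin counts under a gridded
    rate forecast (expected number of events per space-magnitude bin)."""
    rates = np.asarray(forecast_rates, dtype=float)
    counts = np.asarray(observed_counts, dtype=float)
    return float(np.sum(counts * np.log(rates) - rates - gammaln(counts + 1)))

# Hypothetical example: a "standard" model versus a candidate model,
# scored on the same observed counts; the larger log-likelihood wins.
standard = [0.20, 0.05, 0.60, 0.10]
candidate = [0.30, 0.02, 0.50, 0.15]
observed = [1, 0, 0, 0]
print(poisson_log_likelihood(standard, observed),
      poisson_log_likelihood(candidate, observed))
```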

The author's work so far has been directed mainly at the elucidation and prediction of anomalous seismic activity. On this occasion, I would like to discuss the above topics in more detail.

 

2. PROBABILISTIC SYNTHESIS OF PRECURSORY PHENOMENA

2.1 Earthquake Related Datasets

A hypocenter catalog records the origin position of each earthquake (latitude, longitude, depth) and its magnitude. According to current seismology, an earthquake results from rapid rupture of rocks in the Earth's interior; in broad terms, the rock on the two sides of a fault plane slips out of alignment. The hypocenter listed in the catalog is the location and time at which such rupture starts. In addition, we can make use of catalogs that record the orientations of the fault planes of relatively large earthquakes.

Among the various types of geophysical data, hypocenter catalogs are the most voluminous and cover the longest period of time. Each earthquake-prone country edits its own catalog; for example, the Japan Meteorological Agency compiles the earthquakes of Japan, while global earthquakes are compiled by the International Seismological Centre (ISC) and the United States Geological Survey (USGS). The Global Centroid Moment Tensor (CMT) catalog, edited by a geophysics group at Harvard University, is a representative catalog containing fault information for earthquakes worldwide. Now that web services are well developed, we can also access data sources in near real time.

 

2.3 Prediction model of time evolution of seismic activity

So, what should we do first to advance such evaluations using earthquake catalogs? Small earthquakes occur frequently, but they are not completely chaotic: their occurrence obeys certain statistical laws. First, Gutenberg and Richter (1944) found that the number of earthquakes increases (decreases) exponentially as their size (magnitude) decreases (increases). Typical aftershock frequency decays with time according to an inverse power law (Omori, 1894; Utsu, 1961, 1969), and the total number of aftershocks grows exponentially with the magnitude of the main shock (Utsu, 1969; Utsu et al., 1995). From these laws, we can forecast the standard baseline probability of earthquakes in a region, including large ones, from the time series of present and past earthquakes.
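For reference, the empirical laws just mentioned are commonly written in the following standard forms, where $a$, $b$, $K$, $c$, $p$, and $\alpha$ are constants estimated from regional data:
\[ \log_{10} N(M) = a - bM \quad \text{(Gutenberg--Richter)}, \]
\[ n(t) = \frac{K}{(t+c)^{p}} \quad \text{(Omori--Utsu aftershock decay)}, \]
\[ K(M_{\mathrm{main}}) \propto 10^{\,\alpha M_{\mathrm{main}}} \quad \text{(Utsu's aftershock productivity relation)}. \]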

 

2.4 Observed anomalies and their identification as precursors

Of course, in order to predict a large earthquake with a high probability gain, comprehensive study of anomalous phenomena and observations of the earthquake mechanism is essential. However, when some anomaly is found, identifying whether or not it is the precursor of a large earthquake is not easy.

From the appearance of an anomaly alone, it may be impossible to determine whether it is a precursory phenomenon of a large earthquake. Nevertheless, we may be able to say that, compared with the reference probability, the probability of occurrence of a large earthquake has increased to a certain extent in a certain period and region. In this way, we need to quantify the uncertainty in the precursory nature of anomalous phenomena and the urgency they imply for a major earthquake. For this purpose we must study a large number of anomalous cases for potential links to large earthquakes.

Thus, the important question for us is how to incorporate such information into prediction models whose forecast probabilities exceed the underlying baseline probability.

 

2.5 Improving the probability gain by looking for a statistically significant phenomenon

We should pursue further algorithms with predictive power for large earthquakes by finding specific developmental patterns in seismicity catalogs. So far, though not many, alarm-type earthquake prediction methods based on patterns of seismic activity have been proposed (Keilis-Borok et al., 1988, 1966; Rundle et al., 2002; Shebalin et al., 2006; Sobolev, 2001; Tiampo et al., 2002a); some of them have been implemented operationally, with alarms notified by e-mail or published in official documents (the Center for Analysis and Prediction, SSB, China, 1990-2003). In addition, many papers have carried out retrospective predictions after the event happened, and some of these were statistically significant.

Unfortunately, however, their average probability gains were at most several times the probability obtained without such information, and these forecasts alone are not sufficient for disaster prevention. For evaluations of alarm-type predictions, readers are referred to Zechar and Zhuang (2010), Jordan et al. (2011), Zhuang and Ogata (2011), and Zhuang and Jiang (2012).

I myself have also examined whether certain anomalous phenomena are related to changes in the rate of earthquake occurrence, and some of these relations are confirmed to be statistically significant. For example, we analyzed the causal relationship between earthquake series from two different regions (Ogata et al., 1982; Ogata and Katsura, 1984), and we examined periodicity (seasonality) in earthquake occurrence (Ogata and Katsura, 1984; Ma and Vere-Jones). These issues had been discussed frequently in statistical seismology, but such correlations were difficult to analyze with conventional methods because the clustering of earthquakes leads to incorrect results (Aki, 1956). We found it effective instead to apply stochastic point-process models that incorporate a clustering component (see the review paper by Ogata, 1999, and the references therein). These models can also be applied to examine whether various geophysical anomalies have statistically causal relations as precursors of a forthcoming large earthquake.

One caution is needed here. Suppose a significant correlation is observed between two series of events. From the standpoint of prediction this is insufficient; the direction of causality must also be identified. For example, consider the daily records of anomalous ground electric potential observed in the vicinity of Beijing, China, over 16 years from 1982. The issue is whether these were useful as precursors of strong earthquakes of magnitude 4 or larger; it could instead be that the electrical anomalies were aftereffects of the strong earthquakes. By comparing the goodness of fit of competing models with the AIC, the anomalies were found to be statistically significant as precursors of the earthquakes (Ogata and Zhuang, 2001; Zhuang et al., 2005). However, since the probability gain is not very high, these anomalies alone are not enough for practical prediction.

We know that the occurrence probability of a big earthquake is extremely small compared to that of a small earthquake (Gutenberg and Richter, 1944), and there are also regional differences in such probabilities. At present, the probability of a large earthquake is estimated from the frequency distribution of magnitudes in a regional catalog, or alternatively from the recurrence times of large earthquakes on an active fault. In the past, only time-independent risk (i.e., a stationary Poisson process model) was estimated, with the rate classified by the degree of fault activity. To warn the public, for example, Japanese seismologists often said that "it would not be strange if it occurred now," without giving a probability.
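As a minimal sketch of the first approach, the following estimates the Gutenberg-Richter $b$-value by Aki's (1965) maximum-likelihood formula and extrapolates the annual rate of large events; the catalog, completeness magnitude, and target magnitude here are synthetic and purely illustrative.

```python
import numpy as np

def gr_rate_of_large_events(mags, m_c, m_target, years):
    """Estimate the annual rate of events with M >= m_target from a catalog
    complete above m_c, using the Gutenberg-Richter relation.

    b is estimated by the maximum-likelihood formula of Aki (1965):
    b = log10(e) / (mean(M) - m_c).
    """
    mags = np.asarray(mags, dtype=float)
    mags = mags[mags >= m_c]
    b = np.log10(np.e) / (mags.mean() - m_c)
    n_above_mc = len(mags) / years                    # annual rate of M >= m_c
    rate_target = n_above_mc * 10.0 ** (-b * (m_target - m_c))
    return b, rate_target

# Illustrative (synthetic) catalog: 20 years of magnitudes above Mc = 3.0,
# drawn so that the true b-value is about 1.
rng = np.random.default_rng(0)
synthetic_mags = 3.0 + rng.exponential(scale=1.0 / np.log(10), size=2000)
b, rate_m6 = gr_rate_of_large_events(synthetic_mags, m_c=3.0, m_target=6.0, years=20)
print(f"b = {b:.2f}, annual rate of M>=6: {rate_m6:.3f}")
```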

After the 1995 Hyogo-ken Nanbu Earthquake, the Earthquake Research Committee (ERC) of the Japanese government adopted a renewal-process model to forecast time-dependent probabilities based on the elapsed time since the last earthquake; a similar implementation had been made earlier in California. This is an improvement, but the probability gain of the renewal-process predictions over the Poisson-process predictions is only around 1.7 in California, according to Jordan et al. (2011).
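The contrast between the two models can be sketched as follows; the lognormal recurrence-time distribution, mean interval, and aperiodicity used here are assumptions chosen purely for illustration (the ERC's actual renewal model and parameter values differ).

```python
import numpy as np
from scipy import stats

def conditional_probability(dist, elapsed, horizon):
    """P(next event within `horizon` years | quiet for `elapsed` years)."""
    num = dist.cdf(elapsed + horizon) - dist.cdf(elapsed)
    return num / dist.sf(elapsed)

mean_interval = 100.0            # mean recurrence interval (years), illustrative
elapsed, horizon = 80.0, 30.0

# Renewal model: lognormal recurrence-time distribution with aperiodicity ~0.4.
sigma = 0.4
renewal = stats.lognorm(s=sigma, scale=mean_interval * np.exp(-sigma**2 / 2))
p_renewal = conditional_probability(renewal, elapsed, horizon)

# Poisson model: exponential recurrence times, hence memoryless.
poisson = stats.expon(scale=mean_interval)
p_poisson = conditional_probability(poisson, elapsed, horizon)

print(f"renewal: {p_renewal:.2f}, Poisson: {p_poisson:.2f}, "
      f"gain: {p_renewal / p_poisson:.1f}")
```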

However, the ERC's 30-year probability for each individual active fault is very small, and the probability per day is smaller still, even for ruptures on a plate boundary. Therefore, in addition to such long-term forecasts, the use of various data on potential precursory anomalies is desired.

As described above, it is difficult for any individual precursory anomaly to yield a high-probability forecast, but the forecast probability can be enhanced if several anomalies are observed simultaneously. A promising approach is therefore to search a variety of observations for anomalies useful for medium-term and short-term forecasting, to estimate the probability gain of each, and to combine them (Aki, 1981; Utsu, 1979, 1982). For example, identification of foreshocks and of seismicity quiescence belong to short-term and medium-term forecasting, respectively.
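In the simplest treatment of such a combination (in the spirit of Aki, 1981, and Utsu, 1979, 1982), if the individual anomalies $A_1,\dots,A_n$ can be regarded as approximately independent and the probabilities remain small, the combined forecast probability is obtained by multiplying the individual probability gains:
\[ P(\mathrm{EQ} \mid A_1,\dots,A_n) \;\approx\; P_0 \prod_{i=1}^{n} G_i, \qquad G_i = \frac{P(\mathrm{EQ}\mid A_i)}{P_0}. \]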

 

2.6 Short-term forecasting (1): probabilistic identification of foreshocks

Foreshocks are one phenomenon we should exploit for short-term forecasting. Foreshocks are observed fairly often, but usually they are recognized as such only after the major earthquake has happened. Nevertheless, when earthquakes start to occur in a locality, it is a serious public concern whether they are precursors of a significantly larger earthquake. The goal is to use the data of the earthquakes occurring at that moment to predict, statistically, the probability that they are of foreshock type. Because the identification uses the magnitude sequence and the degree of hypocenter concentration in a composite manner, the probability gain of the prediction is heightened; moreover, being a short-term prediction, its probability gain is inherently quite high. Because a certain amount of progress has been made in this research (Ogata et al., 1995, 1996; Ogata, 2011a, b; Ogata and Katsura, 2012), I expect it to be put into practical real-time use in the near future.

 

2.7 Short-term forecasting (2): probabilistic forecasting of aftershocks

After a big earthquake occurs, the Japan Meteorological Agency, and the U.S. Geological Survey (USGS) in California, issue operational aftershock probabilities. The computations are based on maximum likelihood estimation of the Omori-Utsu aftershock decay formula combined with the Gutenberg-Richter magnitude-frequency law for aftershocks (Utsu, 1965; Aki, 1965; Ogata, 1983b).
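A minimal sketch of such a forecast, combining the Omori-Utsu decay with the Gutenberg-Richter law, is shown below; the parameter values are generic illustrative ones, not the operational values used by JMA or the USGS.

```python
import numpy as np

def aftershock_probability(m_main, m_target, t1, t2,
                           a=-1.67, b=0.91, c=0.05, p=1.08):
    """Probability of at least one aftershock with M >= m_target in the window
    (t1, t2) days after a mainshock of magnitude m_main.

    Combines the Omori-Utsu decay with the Gutenberg-Richter law; the rate is
    lambda(t) = 10**(a + b*(m_main - m_target)) / (t + c)**p, and the window
    probability follows from the Poisson assumption.  Parameter values here
    are illustrative only.
    """
    productivity = 10.0 ** (a + b * (m_main - m_target))
    if abs(p - 1.0) < 1e-9:
        integral = np.log((t2 + c) / (t1 + c))
    else:
        integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    expected = productivity * integral
    return 1.0 - np.exp(-expected)        # Poisson probability of >= 1 event

# Example: probability of an M >= 5 aftershock within a week of an M7 mainshock.
print(aftershock_probability(m_main=7.0, m_target=5.0, t1=0.0, t2=7.0))
```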

However, these agencies issue forecasts only after the first day has elapsed, because of the observational difficulty of detecting smaller aftershocks in the early period after the mainshock. In fact, more than half of all large aftershocks occur within the first day. Therefore, despite the adverse conditions for data collection, it is desirable to issue probabilistic aftershock forecasts as soon as possible within 24 hours of the mainshock, to mitigate secondary disasters in the affected areas. For this purpose it is necessary to estimate the time-dependent missing rate (or detection rate) of aftershocks (Ogata and Katsura, 1993, 2006; Ogata, 2005c), which enables real-time probabilistic forecasting immediately after the mainshock (Ogata, 2005c; Ogata and Katsura, 2006).

Similarly, probability forecasts of seismic intensity at a given locality are possible. The Ishimoto-Iida formula (Ishimoto and Iida, 1939) for the maximum seismograph amplitudes of earthquakes follows a distribution analogous to the Gutenberg-Richter formula. This formula, together with the Omori-Utsu aftershock decay and the aftershock detection rate, can be combined to forecast intensities during the early period.

 

3. EARTHQUAKE DYNAMICS AND EARTHQUAKE TRIGGERING

3.1 Interaction Between Earthquakes

To explain chains of earthquakes, as well as quiescence of activity, we need the physical concept of the Coulomb failure stress. The interior of the crust and the lithospheric upper mantle deforms under stress that accumulates steadily in a particular direction, and the lithosphere can be regarded essentially as an elastic body. Fault planes are cracks within the lithosphere, or else plate-boundary interfaces. There are numerous faults, ranging from very small ones to large ones, and the size of a fault is related to the magnitude of the earthquake generated when it slips. Fault planes are oriented in various directions. For each fault plane, the stress tensor in the lithosphere is decomposed into two components: the shear stress, which acts in the direction of fault slip, and the normal stress, which presses on the fault plane. The amounts of these components vary with the orientation of each fault plane. The critical condition is determined by the following Coulomb failure stress:

CFS = (shear stress) − (friction coefficient) × (normal stress − pore fluid pressure)

The Coulomb failure stress increases at a roughly constant rate over time, and when it exceeds a certain threshold the fault slips abruptly, causing an earthquake. The stress then drops to a certain level and begins to accumulate again. This accumulation takes decades for a large earthquake to recur on a plate boundary, and thousands of years for slip to recur on inland active faults.

In recent years, sudden stress changes have attracted many seismologists' attention. When an earthquake occurs nearby, the displacement on its fault produces sudden Coulomb failure stress changes (ΔCFS) in the surrounding crust. The stress on a given fault plane decreases or increases depending on its orientation within the peripheral portion of the fault system. On a fault where the CFS has increased, an earthquake occurs earlier than otherwise expected; on a fault where it has decreased, the occurrence is delayed. If faults of similar orientation dominate a region, we therefore expect seismic activation or quiescence there.
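For reference, the resolution of a stress-change tensor onto a receiver fault can be sketched as follows; the stress tensor, fault geometry, and friction coefficient here are hypothetical, and the sign convention (tension positive, so that unclamping promotes failure) is only one of several in use.

```python
import numpy as np

def delta_cfs(stress_change, normal, slip_dir, mu_eff=0.4):
    """Coulomb failure stress change resolved onto a receiver fault.

    stress_change : 3x3 symmetric stress-change tensor (Pa), tension positive
    normal        : unit normal vector of the receiver fault plane
    slip_dir      : unit vector of the expected slip direction on the plane
    mu_eff        : effective friction coefficient (pore-pressure effect absorbed)
    """
    n = np.asarray(normal, float)
    s = np.asarray(slip_dir, float)
    traction = np.asarray(stress_change, float) @ n
    d_tau = traction @ s          # shear stress change in the slip direction
    d_sigma_n = traction @ n      # normal stress change (tension positive)
    # Unclamping (positive d_sigma_n) promotes failure in this convention.
    return d_tau + mu_eff * d_sigma_n

# Illustrative numbers only: a uniaxial stress change of 0.1 MPa.
ds = np.diag([1.0e5, 0.0, 0.0])
print(delta_cfs(ds, normal=[np.sqrt(0.5), np.sqrt(0.5), 0.0],
                slip_dir=[np.sqrt(0.5), -np.sqrt(0.5), 0.0]))
```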

The pore fluid pressure in the fault zone that enters the CFS is usually treated as constant, but its changes can play a major role. For example, changes in the pressure of fluid magma affect swarm activity in volcanic areas (Dieterich et al., 2000; Toda et al., 2002). Another example is earthquakes induced by an increase in pore pressure within a fault system (Hainzl and Ogata, 2005), caused for instance by extremely heavy rainfall or by strong seismic waves. The seasonal (annual) variation of seismic activity has also been studied. For statistical models and their application to validating data on such induced phenomena, see the review papers (Ogata, 1993, 1999), for example; see also the studies of seismicity changes induced by water injection, analyzed with the ETAS model (Lei et al., 2008, 2011).

 

3.2 Predicting seismic activity in the peripheral area by sudden stress changes

To explain the triggering of earthquakes, and the suppression of seismic activity, it has been useful to examine the increase or decrease of the Coulomb failure stress (CFS) caused by rapid faulting (a big earthquake). When a large earthquake occurs, we observe seismic waves and GPS crustal displacements, and from these observations we can solve for the fault mechanism of the earthquake, namely the size, orientation, and slip vector of the fault. Okada (1992) made it possible to compute the CFS change on a receiver fault system from such a source fault, so studies of earthquake triggering based on the CFS have recently become popular. See, for example, the special issue volumes on this subject edited by Harris (1998) and Steacy (2005).

For example, Ogata (2004b) examined the regional increases and decreases in CFS in inland southwestern Japan caused by the massive 1944 Tonankai earthquake of M7.9 and the 1946 Nankai earthquake of M8.1. Conventionally, some seismic quiescence seen there had been interpreted either as a genuine precursor or as an artifact of incomplete earthquake detection during the wartime period. Positive and negative CFS changes were found to agree well with seismic activation and quiescence, respectively. In particular, that paper classified the seismicity anomalies into pre-seismic, coseismic, and post-seismic phases before and after the massive earthquakes. The scenarios classified in that study may be helpful in interpreting seismic activity in western Japan before the next great earthquakes along the Nankai Trough.

 

3.3 ETAS model and seismic activity

Interactions among earthquakes are generally quite complex. Once an earthquake occurs somewhere, the CFS on parts of the adjacent fault system increases greatly, and many earthquakes are induced; in most cases these are called aftershocks. Some events are also induced outside the aftershock region, and these may be called aftershocks in a broad sense. A large stress change produces many aftershocks, and even a small change can induce some. However, we cannot see the complex fault system in the crust, so detailed calculation of the stress changes is difficult and impractical.

Therefore, a statistical model describing the actual macroscopic outcome of these interactions is required. The ETAS model, which is built from the empirical laws of aftershocks, quantifies the triggered effects for dynamic forecasting. By fitting the model to data selected from an earthquake catalog, its parameters are determined by the maximum likelihood method, and we can then forecast earthquake occurrence rates in a way that conforms to regional diversity.
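A minimal sketch of the temporal ETAS conditional intensity and the log-likelihood maximized in fitting is given below; the standard functional form is used, but the parameter values and the tiny catalog are purely illustrative.

```python
import numpy as np

def etas_intensity(t, times, mags, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity at time t, given past events."""
    past = times < t
    trig = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
    return mu + trig.sum()

def etas_neg_log_likelihood(params, times, mags, m0, t_end):
    """Negative log-likelihood of a temporal ETAS model on [0, t_end]:
    -(sum of log-intensities at the events minus the integrated intensity)."""
    mu, K, alpha, c, p = params
    log_lam = np.array([np.log(etas_intensity(t, times, mags,
                                              mu, K, alpha, c, p, m0))
                        for t in times])
    integral = mu * t_end
    for ti, mi in zip(times, mags):
        if abs(p - 1.0) < 1e-9:
            g = np.log((t_end - ti + c) / c)
        else:
            g = ((t_end - ti + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
        integral += K * np.exp(alpha * (mi - m0)) * g
    return -(log_lam.sum() - integral)

# Illustrative call with a tiny synthetic catalog (times in days, magnitudes).
times = np.array([0.5, 0.7, 2.0, 5.5])
mags = np.array([5.5, 4.0, 4.2, 4.1])
print(etas_neg_log_likelihood((0.1, 0.02, 1.0, 0.01, 1.1), times, mags, 3.5, 10.0))
```

In practice the five parameters would be obtained by minimizing this negative log-likelihood numerically for the regional catalog at hand.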

Incidentally, the rate- and state-dependent friction law of Dieterich, constructed from rock-friction experiments under controlled stresses, has links to the statistical laws of earthquake occurrence (Dieterich, 1994); in particular, it reproduces the temporal and spatial decay of aftershock rates such as the Omori law. However, because of the diversity of seismicity, predictions that adapt well to the actual evolution of seismic activity seem difficult with this approach at the moment.

Among the figures commonly used to display seismic activity are the plot of earthquake magnitudes against occurrence times (M-T plot) and the plot of the cumulative number of earthquakes against time (the cumulative function). Transitions in seismicity can hardly be understood simply by looking at such plots, because they take complex forms generated by successive occurrences of earthquakes (clustering). This clustering was the main obstacle to applying conventional statistical tests that assume uniformly random earthquake generation (a stationary Poisson process) as the null hypothesis. The complexity due to clustering also makes it difficult to reveal seismicity anomalies caused by subtle stress changes; hence we may have missed various signals.

For this reason, seismologists devised various de-clustering methods that keep isolated earthquakes and the largest earthquake of each cluster (the mainshock) while deleting the other earthquakes. Based on the de-clustered data, the statistical significance of seismic quiescence was then tested against the Poisson process. Sometimes, however, the result of the analysis depends on the criteria of the adopted de-clustering algorithm (van Stiphout et al., 2012), leaving a lingering worry that the result may be an artifact of the treatment. In addition, de-clustering causes a significant loss of information, since a large amount of data is discarded from the original catalog.

The ETAS model, in contrast, uses the original earthquake data without de-clustering. As mentioned above, the ETAS model is a point-process model configured to conform to the empirical rules accumulated in various studies, such as those of aftershocks in Japan and of the time evolution of seismicity rates. It captures the regional characteristics of earthquake occurrence, which may be called the "face" of seismic activity, and it has therefore been accepted by seismologists as a standard model of ordinary seismicity. Using the ETAS model as a "yardstick", we can detect significant deviations from normal activity. This is a new and distinctive alternative to de-clustering. Incidentally, a stochastic de-clustering method based on the space-time ETAS model has also been proposed (Zhuang et al., 2002), and its interpretation is clear in the stochastic sense.

 

4. SEISMICITY ANOMALIES

4.1 Seismic quiescence

We therefore measure the deviation of the actual cumulative number of earthquakes from the theoretical cumulative function, which is the time integral of the rate predicted by the ETAS model. When the actual occurrence rate is systematically lower than the rate predicted by the ETAS model, I call the phenomenon relative quiescence (Ogata, 1992). Relative quiescence lasting a number of years was observed over broad regions before M8-class great earthquakes in and around Japan (Ogata, 1992; Ogata et al., 2003a), and similar phenomena were observed before M9-class mega-earthquakes worldwide.
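A minimal sketch of this residual analysis is given below: occurrence times are mapped through the integrated (cumulative) intensity, and a shortfall of observed events relative to the expected count after some change point suggests relative quiescence. The toy intensity function and numbers are illustrative only and are not an ETAS fit.

```python
import numpy as np
from scipy import integrate

def transformed_times(intensity, times, t_start=0.0):
    """Map each occurrence time t_i to Lambda(t_i), the integral of the fitted
    intensity from t_start to t_i.  Under the fitted model, event counts plotted
    against these transformed times should follow the 45-degree line."""
    return np.array([integrate.quad(intensity, t_start, t)[0] for t in times])

def quiescence_deviation(intensity, times, t_change):
    """Observed-minus-expected number of events after time t_change.
    A strongly negative value suggests relative quiescence."""
    lam = transformed_times(intensity, times)
    expected_after = lam[-1] - np.interp(t_change, times, lam)
    observed_after = np.sum(np.asarray(times) > t_change)
    return observed_after - expected_after

# Illustrative use with a toy (non-ETAS) decaying intensity:
toy_intensity = lambda t: 2.0 / (t + 0.1)
times = np.array([0.2, 0.5, 0.9, 1.5, 3.0, 6.0])
print(quiescence_deviation(toy_intensity, times, t_change=2.0))
```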

The author has reported analyses of seismic quiescence to the Coordinating Committee for Earthquake Prediction of Japan, many of which were also published in academic journals (Ogata, 2009, and references therein). They include the following:

i1jStatistical analysis of seismic activities in and around Tohoku District, northern Japan, prior to the large interplate earthquakes off the coast of Miyagi Prefecture;i2jSeismicity changes and stress changes in and around the northern Japan relating to the 2003 Tokachi earthquake of M8.0;i3jSeismic activities in and around Tohoku District, northern Japan, prior to the 16th August 2005 interplate earthquake of M7.2 off the coast of Miyagi Prefecture, and the aftershock activity of the M7.2 earthquake;i4jSeismicity changes in and around Kyushu District before the 2005 earthquake of M7.0 in the western offshore of Fukuoka Prefecture;i5jOn an anomalous aftershock activity of the 2004 Niigata-ken-Chuetsu earthquake of M6.8, and intermediate-term seismicity anomalies preceding the rupture around the focal region;i6jSeismic activities in and around Tohoku District, northern Japan, prior to the 16th August 2005 interplate earthquake of M7.2 off the coast of Miyagi Prefecture, and the aftershock activity of the M7.2 earthquake;i7jAnomalies of seismicity and crustal movement in and around the Noto Peninsula before the 2007 earthquake of M6.9; (8) Long-term probability forecast of the regional seismicity that was induced by the M9 Tohoku-Oki earthquake

I have also analyzed aftershock activities, as follows:
(9) Seismic activities in and around Tohoku District, northern Japan, prior to the 16th August 2005 interplate earthquake of M7.2 off the coast of Miyagi Prefecture, and the aftershock activity of the M7.2 earthquake;
(10) Quiescence of the 2003 foreshock/aftershock activities in and off the coast of Miyagi Prefecture, northern Japan, and their correlation to the triggered stress changes;
(11) Anomalies in the aftershock sequences of the 2003 Tokachi-Oki earthquake of M8.0 and the 2004 Kushiro-Oki earthquake of M7.1, and seismicity changes in the eastern Hokkaido inland;
(12) Relative quiescence reported before the occurrence of the largest aftershock (M5.8) among the aftershocks of the 2005 earthquake of M7.0 at western Fukuoka, Kyushu, and possible scenarios of precursory slips considered for the stress shadow covering the aftershock area;
(13) Seismicity changes in and around Kyushu District before the 2005 earthquake of M7.0 in the western offshore of Fukuoka Prefecture;
(14) On the aftershock activity of the 2005 earthquake of M7.2 off the coast of Miyagi Prefecture;
(15) Anomalies of seismicity in space and time measured by the ETAS model and stress changes;
(16) On the 2007 Chuetsu-Oki earthquake of M6.8: preceding anomalous seismicity and crustal changes around the source, and the normal feature of the aftershock activity;
(17) Seismicity changes in northern Tohoku District before the 2008 Iwate-Miyagi Nairiku Earthquake;
(18) Aseismic slip scenario for transient crustal deformations around the southern fault before the 2008 Iwate-Miyagi Inland earthquake.
(See, for example, http://www.ism.ac.jp/~ogata/yoti.html and the Coordinating Committee for Earthquake Prediction newsletters at http://cais.gsi.go.jp/YOCHIREN/report.html; see also Reference [65], "40 Years of the Coordinating Committee for Earthquake Prediction".)

Except for case (12), which reported seismic quiescence of aftershock activity before the largest aftershock, these are all retrospective analyses; case (12) will be described in some detail in Section 5.2. In addition, in a study of 76 aftershock sequences in Japan, relative quiescence was observed in 34 cases (Ogata, 2001, Appendix). Section 4.5 describes how the results of this aftershock study can be used for a space-time probability forecast of a neighboring large earthquake of size similar to the mainshock.

Here, I mention a remarkable result concerning the aftershock activities of inland earthquakes of magnitude 6 or larger in southwestern Japan during the 30 years before and after the 1946 Nankai earthquake of M8.1. The 1925 Tajima earthquake of M6.8, the 1927 Kita-Tango earthquake of M6.8, the 1943 eastern Tottori earthquake of M6.2, the 1943 Tottori earthquake of M7.3, the 1944 Tonankai earthquake of M7.9, and the 1945 Mikawa earthquake of M6.8 all preceded the 1946 Nankai earthquake. Among these, relative quiescence can be seen in all of the aftershock sequences except that of the Kita-Tango earthquake. On the other hand, the 1948 Fukui earthquake of M7.1, the 1955 southern Tokushima Prefecture earthquake of M6.4, the 1961 Kita-Mino earthquake of M7.0, the 1963 Echizen-Misaki-Oki earthquake of M6.6, the 1968 Ehime-Ken Seigan earthquake of M6.6, the 1969 Gifu-Ken Chubu earthquake of M6.6, and the 1978 Shimane-Ken Chubu earthquake of M6.6 occurred during the 30 years after the Nankai earthquake. In these aftershock sequences, in contrast, relative quiescence was not seen, and the aftershock activity stayed on track.

 

4.2 Stress change and seismic quiescence

Since the GPS observation network in Japan was established, aseismic fault motions (slow slips) that cannot be detected by seismometers have been identified one after another in the plate-boundary region. By assuming such a slow slip, we can discuss the relationship between seismic quiescence or activation and weak stress changes in the crust.

Specifically, assume that slow slip has taken place on a focal fault, or on an adjacent portion of it, during some period. Then, depending on the distribution of fault orientations around the focal fault, the Coulomb failure stress can decrease, and the seismicity there is expected to drop correspondingly. Such regions are called the stress shadow and the seismicity shadow, respectively.

We sometimes observe cases in which the aftershock rate starts, partway through a sequence, to decay faster than predicted by the Omori-Utsu rate fitted to the earlier period; this, again, is called relative quiescence. Such behavior appears as a significant difference between the predicted and actual occurrence rates when the data are analyzed with the ETAS model. In fact, as seen in most of the examples I reported above, the stress shadows coincide with the seismicity shadows.

 

4.3 Aseismic slip and seismicity anomalies

The seismicity rate can become systematically lower than predicted in some cases and systematically higher than predicted in others; the latter is called relative activation. The regions of relative quiescence and of relative activation coincide with the regions of CFS decrease and increase, respectively.

An example is seen in the seismicity before the 2004 Chuetsu Earthquake of M6.8. Assuming precursory slow slip on the source fault, the surrounding region was divided into four subregions according to whether the CFS was increased or decreased; each subregion theoretically corresponds to an area in which seismicity should be either promoted or suppressed. The ETAS model was then fitted to the earthquake data from each of the four subregions for the period from September 10, 1997, until the 2004 M6.8 earthquake. As a result, a clear change in seismic activity was found in each area: in the stress-shadow regions the seismicity became quieter than predicted by the ETAS model, while in the areas of increased CFS the actual seismicity was activated relative to the prediction (Ogata, 2007).

Similarly, the anomaly patterns of seismic activity agree well with the pattern of CFS change relative to its trend in the following cases: the seismicity in and around Kyushu District until the 2005 Fukuoka-Ken-Seiho-Oki earthquake of M7.0 (Ogata, 2010a); the seismicity around the Noto Peninsula until the 2007 Noto Peninsula Earthquake of M6.9 (Ogata, 2011d); and the seismicity in and around the Tohoku District until the 2008 Iwate-Miyagi-Ken Inland earthquake of M7.2 (Kumazawa et al., 2010).

However, the changes in activity did not all begin simultaneously, which may mean that the aseismic slip proceeds continuously or intermittently. In contrast, the seismicity anomalies in the regions investigated before the 2003 Tokachi-Oki earthquake of M8.0 started at about the same time (Ogata, 2005b), which might mark the onset of a precursory slip strong enough to produce the change.

 

4.4 Deducing local stress variations from the spatio-temporal variation of aftershock activity

In many aftershock sequences there are anomalous parts in the space-time distribution of the aftershocks. To see these relatively clearly, we first fit the Omori-Utsu aftershock decay formula to the occurrence times and then transform the times using the estimated theoretical cumulative function. We can then examine the aftershock occurrences in detail, much as if watching a video whose playback speed has been adjusted to capture motion that is otherwise too fast or too slow.
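Concretely, with estimated parameters $\hat K$, $\hat c$, $\hat p$, each occurrence time $t$ is mapped to the expected cumulative count (the standard integrated Omori-Utsu form):
\[
\tau=\Lambda(t)=\int_{0}^{t}\frac{\hat K}{(s+\hat c)^{\hat p}}\,ds=
\begin{cases}
\dfrac{\hat K}{1-\hat p}\Bigl[(t+\hat c)^{1-\hat p}-\hat c^{\,1-\hat p}\Bigr], & \hat p\neq 1,\\[2ex]
\hat K\,\ln\!\dfrac{t+\hat c}{\hat c}, & \hat p=1,
\end{cases}
\]
so that, if the sequence follows the fitted decay, the transformed times $\tau_i$ are uniformly distributed.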

Aftershocks occurring normally throughout the aftershock area will be distributed uniformly in such transformed time, and no anomaly is seen. Relying on this, we examine whether the spatial distribution in transformed time remains uniform in each part of the region. If a non-uniform distribution is observed in certain portions of the transformed space-time, this indicates a discrepancy between the actual aftershock occurrence and the fitted Omori-Utsu decay. There are several possible scenarios for such discrepancies. First, secondary aftershocks following a large aftershock stand out as a cluster of points among points that are otherwise uniformly distributed; such a cluster is the trace of a new rupture extending a peripheral portion of the mainshock fault. When non-uniform, heterogeneous portions other than secondary aftershocks are seen, exploring their causes is very important.

In any case, such time transformation reveals non-uniformities that are relatively clearly anomalous with respect to the Omori-Utsu formula. From about a dozen recent large earthquakes, and from their fairly accurate spatial aftershock distributions, Ogata (2010b) identified space-time portions of relative quiescence. These were interpreted as being associated with slow slips in the vicinity of the fault of a large aftershock or of the mainshock, and such slips can occur as coseismic, post-seismic, or preseismic slips.

These anomalies were investigated systematically under the assumption that they result from localized changes in the Coulomb failure stress. In addition, assuming several scenarios of stress changes due to slow slips, Ogata and Toda (2010) and Ogata (2010b) performed simulations based on the rate- and state-dependent friction law of Dieterich to reproduce the seismicity anomalies (relative activation and quiescence) within aftershock sequences.

 

4.5 Space-time probability gain of a large earthquake under relative quiescence of aftershocks

When we observe relative quiescence in aftershock activity, the following questions arise: in what percentage of cases is the anomaly followed by a large earthquake, how long afterwards, and where? Since these questions involve many conditions, and hence many parameters, an answer cannot easily be given.

However, based on statistical studies of aftershock sequences in Japan (Ogata, 2001), the following can be said about the probability gain for the occurrence of a large earthquake. First, if a large earthquake occurs somewhere, the probability per unit area that another earthquake of similar size will occur in its vicinity is larger than the probability that one will occur in a distant area. This is itself a simple statistical result, and it physically suggests that a neighboring earthquake is more likely to be induced by the sudden stress change in the periphery caused by the abrupt slip of the first earthquake.

If the aftershock activity shows relative quiescence, a large aftershock becomes more likely to occur within or on the boundary of the aftershock area than in the case where the activity has kept decaying on track, as expected from the ETAS model or the Omori-Utsu formula. Furthermore, if the relative quiescence lasts longer (more than three months, for example), another earthquake of similar size to the mainshock becomes more likely to occur in the vicinity of the aftershock area (within 200 km, for example) within a period of about six years (Ogata, 2001). The probability gain is several times that of the case where we have no such information about the aftershock activity.

 

5. SEISMICITY ANOMALIES AND GEODETIC ANOMALIES

5.1 Aseismic slip and crustal deformation

If there is slipping motion on a fault, we can in principle see geodetic changes at the ground surface around it. The GSI compiles daily geodetic positions of GPS stations throughout Japan, and from this catalog we can calculate the baseline distances between stations. The geodetic time series show that the contraction or extension of the distance between stations is basically linear in time, because the subducting plate converges at a constant speed.
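A minimal sketch of how such a departure from the linear trend might be flagged is given below; the baseline series, trend-fitting window, and transient amplitude are synthetic and purely illustrative.

```python
import numpy as np

def baseline_trend_residuals(days, lengths, fit_until):
    """Fit a linear trend to a GPS baseline-length series using data up to
    `fit_until` (days), then return residuals of the whole series from that
    trend.  A persistent one-sided run of residuals afterwards suggests a
    transient deviation such as a slow slip."""
    days = np.asarray(days, float)
    lengths = np.asarray(lengths, float)
    mask = days <= fit_until
    slope, intercept = np.polyfit(days[mask], lengths[mask], 1)
    return lengths - (slope * days + intercept)

# Illustrative synthetic series: steady contraction plus a late transient.
days = np.arange(0, 1000.0)
lengths = 50_000.0 - 0.00002 * days                     # metres, illustrative
lengths[700:] += 0.003 * (days[700:] - 700) / 300       # ~3 mm transient extension
residuals = baseline_trend_residuals(days, lengths, fit_until=700)
print(residuals[-1])   # ~0.003 m departure from the pre-existing trend
```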

However, from about three to four years before the 2004 Chuetsu Earthquake, the time series of baseline-distance variations around the Chuetsu Earthquake fault showed systematic deviations from the linear trend (Ogata, 2007). With the exception of the baselines connecting the three stations nearest the fault, these deviations were significantly correlated with the coseismic displacements of the Chuetsu Earthquake.

Similar baseline-distance anomalies between GPS stations were observed in and around the focal regions before the following earthquakes: the 2011 Tohoku-Oki earthquake (GSI, 2012), the Iwate-Miyagi-Ken Inland Earthquake (Kumazawa et al., 2010), the Fukuoka-Ken Seiho-Oki earthquake (Ogata, 2010a), the Noto-Hanto Earthquake (Ogata, 2011d), and the Chuetsu-Oki Earthquake (Ogata, 2011d). Each of these baseline deviations is consistently explained by slow slip on the earthquake source fault.

The above are post-hoc analyses based on knowledge of the source fault obtained from coseismic displacements. From a predictive viewpoint, it is desirable to estimate such fault slip in near real time while the slip is taking place. Indeed, some estimates of slip on the plate boundary have already been obtained by inversion of GPS records, and the GSI regularly reports such estimates of coseismic, post-seismic, and large recurrent slow slips to the Coordinating Committee for Earthquake Prediction.

Moreover, zones of locked fault (slip deficit) on the plate boundary have also been determined, taking into account both the rapid slips of large earthquakes and long-term slow slips (Hashimoto et al., 2009). The 2011 Tohoku-Oki Earthquake and the 2003 Tokachi-Oki Earthquake eventually occurred in such locked areas (Matsu'ura, 2012).

However, it is difficult to obtain a fine image of small slip, especially inland slip, even though the inland GPS stations are spaced closely enough. This is due not only to GPS observation errors but also to the high level of seismic activity: because strong earthquakes occur frequently, the effects of their slips are mixed into the GPS records. It is urgent to develop statistical models and methods to separate such signals.

In any case, to estimate slow slips more precisely, combined modeling and analysis of seismicity anomalies and geodetic anomalies will be useful. Analyzing both the seismic activity and the transient geodetic movements in a number of areas, and locating the aseismic slip responsible, is very important and is likely, eventually, to help increase the probability gain of forecasts of large earthquakes.

 

5.2 Considering the scenario of an earthquake from aseismic slip

When anomalies of crustal movement and of seismic activity are observed, we must set scenario assumptions about the fault mechanism and location of the precursory slip in order to compute a prediction probability, and we must estimate their uncertainties. We further need to estimate the likelihood of each scenario considered. None of this is easy. A possible method is to construct a logic tree of the various scenarios for rupture of the fault system, attaching appropriate subjective or objective probabilities to the branches, as has been done in long-term forecasting in California and Japan; the scenario ensemble then yields a forecast probability. Similarly, logic trees of different scenarios need to be considered for medium-term and short-term prediction as well.
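In its simplest form, if the logic tree assigns weight $w_k$ (with $\sum_k w_k = 1$) to rupture scenario $S_k$, the ensemble forecast probability of the event $E$ is the weighted mixture
\[ P(E) \;=\; \sum_{k} w_k \, P(E \mid S_k), \]
and the uncertainty of the forecast can be examined through the spread of the branch probabilities $P(E \mid S_k)$.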

At the Coordinating Committee for Earthquake Prediction meeting of April 6, 2005, I reported relative quiescence in the aftershocks of the Fukuoka-Ken Seiho-Oki earthquake (Ogata, 2005d). In addition, I set up several slow-slip scenarios on the active faults around the aftershock region as possible causes of the quiescence; that is, I looked for potential slow-slip patches on nearby active faults that might have created a stress shadow producing the relative quiescence of the aftershock sequence.

Among these faults, the Kego fault, which traverses Fukuoka City, received a large positive CFS change from the mainshock rupture and was thus in circumstances favorable to induced slow slip. Furthermore, the seismogenic zone along the Kego fault had already become activated before the Fukuoka-Ken Seiho-Oki Earthquake occurred (Ogata, 2010a). I therefore set up a scenario in which slow slip may have occurred on this fault and examined the resulting pattern of stress change over the aftershock area. However, such slip would not produce a stress shadow in the aftershock area, so I consider the probability that slow slip had occurred on the Kego fault to be quite low.

A few potential slow slips on other neighboring active faults could have made a stress shadow covering the aftershock region; however, no large earthquake has occurred there so far.

In fact, about a month later, the largest aftershock occurred at the southeastern end of the aftershock zone. A post-mortem examination, using the fault mechanism of the largest aftershock together with detailed aftershock data, allowed a detailed scenario to be constructed: by assuming slow slip in the gap between the fault of the largest aftershock and the mainshock fault, we can well explain the relative quiescence of activity in the deeper part of the aftershock zone (Ogata, 2006a). At the same time, this slip can also explain the relative quiescence of the induced swarm activity that occurred away from the aftershock area (Ogata, 2006a).

Even if we can draw such scenarios after the fact, setting them up in advance as predictions of the future is much more vague and difficult. Moreover, we must predict the time of occurrence, not just the place, which is more difficult still. Even if a slow slip is revealed somewhere, many papers suggest that it does not always indicate an imminent precursor of fault rupture.

 

6. CONCLUDING REMARKS

In order to predict major earthquakes with high probability gain, and also to evaluate the progress of such predictions properly, comprehensive study of anomalous phenomena and observations of the earthquake mechanism is essential. To incorporate these so that the forecast probability exceeds that of the standard statistical model of seismic activity, the study of seismicity must be carried on steadily, brick by brick.

Furthermore, to assess the urgency and uncertainty of a major earthquake associated with anomalous phenomena, we must accumulate many case studies. Based on these, we must provide possible prediction scenarios together with their likelihoods. To adapt well to the diversity of the earthquake generation process, Bayesian prediction is useful (Akaike, 1980; Nomura et al., 2011), and region-specific models also need to be considered.

What I can say from my experience so far is that the methods of statistical science are essential for elucidating, and ultimately predicting, the behavior of a complex global system such as this. There is a need to forecast using hierarchical Bayesian models built to reflect the diversity contained in the vast amount of seismicity and other data. Space-time models for seismic activity have accordingly become more elaborate than ever (Ogata, 1998, 2004a, 2011c; Ogata et al., 2003b; Ogata and Zhuang, 2006).

A similar evolution is required for statistical models of geodetic GPS data, as described above. Without professionals engaged in earthquake statistics (statistical seismology), such research is difficult; I believe statistical seismology is essential for the study of the complex systems of the Earth. Educating citizens to understand the forecast probabilities of complex phenomena is also a duty of researchers and practitioners engaged in statistical science.

 

 

REFERENCES