Statistical Seismology Seminars (Updated on 3 March 2025, NEW!)

Seminar No. 100
Speaker: Mirwald, Aron (ETH Zurich (Swiss Federal Institute of Technology, Switzerland), Swiss Seismological Service, graduate student)
Date/Time: 10 March 2025 (Mon), 15:00 - 16:00
Location: Institute of Statistical Mathematics, 4th-floor lounge, and online
Title: Analysis of large earthquake sequences in Japan: Significance of spatial and temporal patterns of the b-value
Abstract: The
b-value of the Gutenberg–Richter law describes the relative frequency of
earthquake magnitudes, and is often observed to vary over time and space.
Such variations have been reported to be caused by several factors, including
tectonic stress, depth, and fore- and aftershock activity. The b-value is
highly relevant in the context of earthquake forecasting because it directly
translates into probabilities of future large earthquakes, making it a key component
in state-of-the-art earthquake hazard and forecasting models. The
nature of reported b-value variations is still controversial in the
scientific community. While some already use b-value variations for practical
applications, others find that changes in b-values do not represent a
reliable indicator of physical quantities or future seismicity. In order to
contribute to the ongoing scientific debate and clarify the nature of b-value
variations, we recently developed a method to quantify the significance of
the b-value variation (Mirwald et al., 2024), called b-significant. In
this work, we systematically investigate the b-value variation of large
earthquake sequences in Japan by applying the b-significant method. First, we
test whether the b-value in large earthquake sequences varies more than expected by
random chance. Second, we explore whether there are any consistently occurring
spatial patterns of b-values before or after large earthquakes.
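As background for the b-value analyses described above, here is a minimal sketch of the standard maximum-likelihood (Aki-Utsu) b-value estimator with the Shi and Bolt (1982) standard error; the synthetic catalog, completeness magnitude mc, and bin width dm are illustrative assumptions, and this is not the b-significant method of the talk.

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965, with Utsu's binning correction)
    and the Shi & Bolt (1982) standard error."""
    m = np.asarray(mags)
    m = m[m >= mc]                       # keep only events above completeness
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    se = 2.3 * b**2 * m.std(ddof=1) / np.sqrt(len(m))
    return b, se

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_b, mc = 1.0, 2.0
    # Synthetic Gutenberg-Richter magnitudes (exponential above mc, b = 1)
    mags = mc + rng.exponential(scale=np.log10(np.e) / true_b, size=5000)
    mags = np.round(mags, 1)             # discretize like a real catalog
    b, se = b_value_mle(mags, mc)
    print(f"estimated b = {b:.2f} +/- {se:.2f}")
```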
Seminar No. 99
Date/Time: 31 October 2024 (Thu), 13:30 - 15:30
Location: Institute of Statistical Mathematics, 4th-floor lounge, and online
-1
Speaker: Dr. Hsieh, Ming-Che (Earthquake Disaster & Risk Evaluation and Management Center (E-DREaM), National Central University, Taoyuan, Taiwan, Associate Research Fellow / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Associate Professor)
Title: Toward Real-Time Ground-Shaking-Intensity Forecasting Using ETAS and GMM: Insights from the Analysis of Recent Large Earthquake Sequences in Taiwan, and Its Potential Applications
Abstract: Earthquake
forecasting, combined with precise ground-shaking estimations, plays a
pivotal role in safeguarding public safety, fortifying infrastructure, and
bolstering the preparedness of emergency services. This study introduces a
comprehensive workflow that integrates the epidemic-type aftershock sequence
(ETAS) model with a pre-selected ground-motion model (GMM), facilitating
accurate short-term forecasting of ground-shaking intensity, which is crucial
for effective earthquake warning. First, an analysis was conducted on an
earthquake catalog spanning from 1994 to 2022 to optimize the ETAS
parameters. The dataset used in this analysis allowed for the further
calculation of total, background, and clustering seismicity rates, which are
crucial for understanding spatiotemporal earthquake occurrence. Subsequently,
short-term earthquake activity simulations were performed using these
up-to-date seismicity rates to generate synthetic catalogs. The
ground-shaking impact on the target sites from each synthetic catalog was
assessed by determining the maximum intensity using a selected GMM. This
simulation process was repeated to enhance the reliability of the forecasts.
Through this process, a probability distribution was created, serving as a
robust forecast of ground-shaking intensity at the sites. The performance of
the forecasting model was validated through two examples of the Taitung
earthquake sequence in September 2022, and the Hualien earthquake sequence in
April 2024, showing the workflow's effectiveness in forecasting earthquake
occurrences and site-specific ground-shaking probability estimations. The
proposed forecasting model can quickly deliver short-term seismic hazard
curves and warning messages, facilitating timely decision-making. The possible
applications of the workflow will also be presented.
Reference: Ming-Che Hsieh, Chung-Han Chan, Kuo-Fong Ma, Yin-Tung Yen, Chun-Te Chen, Da-Yi Chen, Yi-Wun Mika Liao; Toward Real-Time Ground-Shaking-Intensity Forecasting Using ETAS and GMM: Insights from the Analysis of the 2022 Taitung Earthquake Sequence. Seismological Research Letters 2024; doi: https://doi.org/10.1785/0220240180
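To make the forecasting workflow concrete, here is a minimal sketch of its final Monte Carlo step: many simulated short-term catalogs are each reduced to a maximum intensity at a target site through a ground-motion relation, and the resulting sample gives an exceedance probability. The functions toy_gmm and simulate_catalog, their coefficients, and the threshold are hypothetical placeholders, not the calibrated ETAS model or GMM used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_gmm(mag, dist_km):
    """Hypothetical ground-motion relation giving an intensity at the site.
    A real application would use the pre-selected regional GMM instead."""
    noise = rng.normal(0.0, 0.3, size=np.shape(mag))
    return 1.5 * mag - 3.5 * np.log10(dist_km + 10.0) + noise

def simulate_catalog():
    """Stand-in for one ETAS-simulated short-term catalog: a random number of
    events with Gutenberg-Richter magnitudes (b = 1) and random distances."""
    n = rng.poisson(20)
    mags = 3.0 + rng.exponential(1.0 / np.log(10), size=n)
    dists = rng.uniform(5.0, 80.0, size=n)      # epicentral distance (km)
    return mags, dists

n_sim, threshold = 2000, 4.0                    # catalogs and intensity threshold
max_intensity = np.full(n_sim, -np.inf)
for i in range(n_sim):
    mags, dists = simulate_catalog()
    if len(mags):
        max_intensity[i] = np.max(toy_gmm(mags, dists))

# Exceedance probability of the target intensity over the forecast window
p_exceed = np.mean(max_intensity >= threshold)
print(f"P(max intensity >= {threshold}) = {p_exceed:.3f}")
```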
-2
Speaker: Petrillo, Giuseppe (Nanyang Technological University (NTU), Singapore, Visiting Researcher / Risk Analysis Research Center, Institute of Statistical Mathematics)
Title: The Impact of Stress Redistribution on the Spatial and Magnitude Patterns of Future Earthquakes
Abstract: Predicting
the location and timing of the next major earthquake remains a critical challenge
in seismic forecasting. While it seems logical that future earthquakes will
occur in regions where stress has accumulated from previous events, most
forecasting models do not consider the relationship between stress
distribution and earthquake occurrence. In this study, we utilize a physical
earthquake simulator to investigate how stress redistribution influences the
likelihood of large earthquakes and aftershocks. Our results show that major
earthquakes are more likely to initiate near the edges of previous ruptures,
where stress has not yet been fully released. Additionally, we explore how
this redistribution of stress affects the magnitude distribution of
subsequent aftershocks. These findings provide valuable insights for
improving earthquake forecasting models by incorporating stress evolution,
potentially enhancing their predictive accuracy.

Seminar No. 98
Date/Time: 1 October 2024 (Tue), 14:30 - 17:00
Location: Institute of Statistical Mathematics, Seminar Room 4 (D312B), and online
-1
Speaker: Dr. Wu, Stephen (Institute of Statistical Mathematics, Department of Advanced Data Science, Associate Professor)
Title: Potential of LLMs in seismology
Abstract: The
application of Large Language Models (LLMs) in seismology holds significant
promise, but direct usage of LLMs as standalone tools often falls short in
addressing the complexity of seismic data and tasks. In this presentation, I
will explore LLM orchestration as a system-building challenge, demonstrating
how integrating multiple LLMs within a structured framework enhances the
potential for accurate and efficient problem-solving in seismology. This
approach allows for specialized task management and better adaptation to
domain-specific needs. To further refine these ideas, we organized a local
hackathon in Japan, inviting seismologists to provide feedback and insights
on the utility of orchestrated LLM systems for seismic analysis. The outcomes
of this collaborative effort will help shape future applications of LLMs in
the field.
-2
Speaker: Zhan, Chengxiang (China University of Geosciences (Beijing), graduate student / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Title: Neural Point Process: A Modulated Renewal Model for Temporal Event Modeling
Abstract: Recent
research has demonstrated that Neural Point Processes (NPPs) are effective in
modeling time series of discrete events. However, the underlying rationale
for this approach remains insufficiently explored. Two key questions arise:
First, how can NPPs successfully model event sequences using only a limited
amount of historical data? Second, does the amount of required historical
data vary depending on the type of point process being modeled? Here, we aim
to address both questions. We propose that NPPs can be interpreted as
modulated renewal models. Specifically, one neural network learns the renewal
process using the current waiting time as input, while another neural network
captures the modulation effect based on intervals between past events. Our findings
further reveal that the amount of historical data needed for accurate
modeling varies across different types of point processes.
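The architecture described in the abstract can be sketched as two small networks whose positive outputs multiply into a conditional intensity, lambda(t | history) = renewal(current waiting time) x modulation(recent inter-event intervals). The sketch below uses untrained random weights and invented layer sizes purely to illustrate that factorization; it is not the trained model from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Tiny fully connected network with random (untrained) weights."""
    return [(rng.normal(0, 0.5, (m, n)), rng.normal(0, 0.1, n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

K = 5                              # number of past intervals fed to the modulation net
renewal_net = mlp([1, 16, 1])      # input: current waiting time
modulation_net = mlp([K, 16, 1])   # input: the K most recent inter-event intervals

def intensity(t, event_times):
    """Conditional intensity of a modulated renewal model:
    lambda(t | history) = renewal(waiting time) * modulation(past intervals)."""
    past = np.asarray([s for s in event_times if s < t])
    waiting = t - past[-1] if len(past) else t
    intervals = np.diff(past)[-K:] if len(past) > 1 else np.array([])
    intervals = np.pad(intervals, (K - len(intervals), 0))   # fixed-length history
    renewal = np.exp(forward(renewal_net, np.array([[waiting]])))[0, 0]
    modulation = np.exp(forward(modulation_net, intervals[None, :]))[0, 0]
    return renewal * modulation    # positive by construction

events = [0.4, 1.1, 1.5, 2.7, 3.0]
print("lambda(3.5 | history) =", intensity(3.5, events))
```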
-3
Speaker: Prof. Zhuang, Jiancang (Institute of Statistical Mathematics, Department of Fundamental Statistical Mathematics, Professor)
Title: Quantification of earthquake predictability
Abstract: In earthquake
forecasting, there is a significant gap between complete randomness and
complete determinism. This presentation begins by discussing how to
quantify predictability and outlining the current state of earthquake
predictability from an information-theoretic perspective.
-4
Speaker: Dr. Clements, Timothy (U.S. Geological Survey's Earthquake Science Center in Moffett Field, CA, USA, Postdoctoral Fellow)
Title: A Ground Motion-based Approach to Earthquake Early Warning and Earthquake Statistics
Abstract: Real-time
earthquake early warning and forecasting systems have focused on using an
earthquake's location, origin time, and magnitude to forecast either ground
motion or the rate of earthquakes. Here, we suggest using continuous seismic
ground motion to forecast ground motion in the near (seconds) to intermediate
(days) future. I will introduce the GRAph Prediction of Earthquake Shaking
(GRAPES) algorithm (Clements et al. 2024, GRL), a deep learning model trained
to characterize and propagate earthquake shaking across a seismic network,
for use in earthquake early warning (EEW). I will show that, when applied to
earthquake seismology, deep learning is not a black box; GRAPES' internal
activations, which I call "seismic vectors", correspond to the arrival of
distinct seismic phases. While
trained on earthquakes recorded in Japan, I will show that GRAPES, without
modification, outperforms the USGS ShakeAlert earthquake early warning system
on the 2019 M7.1 Ridgecrest, CA earthquake. I will then apply earthquake
forecasting techniques to extract the Gutenberg-Richter b-value and
Omori-Utsu parameters from the distribution of recent ground motion recorded
at a single nearby seismometer during an aftershock sequence using a maximum
likelihood approach. We apply our ground motion-based approach to the two
weeks following the 2019 M7.1 Ridgecrest, CA earthquake sequence.
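As a companion to the aftershock-parameter extraction mentioned above, here is a minimal sketch of a conventional maximum-likelihood fit of the Omori-Utsu decay n(t) = K/(t + c)^p to aftershock occurrence times; the synthetic times, starting values, and observation window are assumptions, and the talk's method works from ground-motion distributions rather than a catalog of picked events.

```python
import numpy as np
from scipy.optimize import minimize

def omori_neg_loglik(params, times, T):
    """Negative log-likelihood of a non-stationary Poisson process with
    Omori-Utsu rate n(t) = K / (t + c)^p observed on (0, T]."""
    logK, logc, p = params
    K, c = np.exp(logK), np.exp(logc)
    rate = K / (times + c) ** p
    if abs(p - 1.0) < 1e-8:
        integral = K * (np.log(T + c) - np.log(c))
    else:
        integral = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return integral - np.sum(np.log(rate))

def simulate_omori(K=50.0, c=0.1, p=1.1, T=30.0, seed=0):
    """Draw aftershock times by thinning a homogeneous Poisson process."""
    rng = np.random.default_rng(seed)
    lam_max = K / c ** p
    t = rng.exponential(1.0 / lam_max, size=int(3 * lam_max * T)).cumsum()
    t = t[t < T]
    keep = rng.uniform(size=len(t)) < (K / (t + c) ** p) / lam_max
    return t[keep]

T = 30.0
times = simulate_omori(T=T)
res = minimize(omori_neg_loglik, x0=[np.log(100.0), np.log(0.05), 1.2],
               args=(times, T), method="Nelder-Mead")
K_hat, c_hat, p_hat = np.exp(res.x[0]), np.exp(res.x[1]), res.x[2]
print(f"{len(times)} events: K = {K_hat:.1f}, c = {c_hat:.3f}, p = {p_hat:.2f}")
```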
Seminar No. 97
Speaker: Dr. Guo, Yicun (University of Chinese Academy of Sciences, Assistant Professor / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Date/Time: 26 August 2024 (Mon), 15:00 - 16:00
Location: Institute of Statistical Mathematics, 4th-floor lounge, and online
Title: Statistical modeling of 3D seismicity and its correlation with fault slips along major faults in California
Abstract: This
study applies a 3D Epidemic Type Aftershock Sequence (ETAS) model to the
earthquake catalog in California.
A new depth kernel function incorporating the scaling effect of mainshock
magnitudes is introduced into the model. The results demonstrate that the new
model improves data fitting, and we find a consistent decrease in aftershock
productivity with depth by stochastic reconstruction. We attribute this to
the decreasing fault coupling with depth or active seismic faulting in the
upper crust. Furthermore, we compare background seismicity rates derived from
the new model with the long-term slip rates on the faults in the study
region. We
reveal a linear increase in the seismicity rate with the slip rate on the
logarithmic scale, suggesting that the accumulated strain on faults is partially
released by seismic activities. Additionally, a significant correlation
exists between the background seismicity rate and fault segments exhibiting
relatively high creep rates, especially in the transition zones for fault
coupling along the central San Andreas fault. This finding suggests that
interseismic background earthquakes mostly occur in areas with moderate
coupling ratios.
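For readers unfamiliar with the model family used here, the sketch below writes out a standard space-time ETAS conditional intensity with an isotropic, magnitude-scaled spatial kernel; the parameter values and toy catalog are placeholders, and the depth kernel introduced in the talk is not included.

```python
import numpy as np

# Illustrative ETAS parameters (placeholders, not fitted values)
MU, A, ALPHA, C, P, D, Q, M0 = 0.02, 0.3, 1.5, 0.01, 1.1, 0.01, 1.8, 3.0

def etas_intensity(t, x, y, catalog):
    """Space-time ETAS conditional intensity
    lambda(t, x, y) = mu + sum_i kappa(m_i) * g(t - t_i) * f(r_i; m_i)
    with a normalized Omori-type time kernel and an inverse-power spatial kernel."""
    lam = MU                                  # homogeneous background rate
    for ti, xi, yi, mi in catalog:
        if ti >= t:
            continue
        productivity = A * np.exp(ALPHA * (mi - M0))
        time_kernel = (P - 1) / C * (1 + (t - ti) / C) ** (-P)
        sigma2 = D * np.exp(ALPHA * (mi - M0))          # magnitude-scaled extent
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        space_kernel = (Q - 1) / (np.pi * sigma2) * (1 + r2 / sigma2) ** (-Q)
        lam += productivity * time_kernel * space_kernel
    return lam

# toy catalog: (time in days, x in km, y in km, magnitude)
catalog = [(0.0, 0.0, 0.0, 5.5), (0.3, 1.2, -0.5, 4.1), (1.0, 0.4, 0.8, 3.6)]
print("lambda(2.0 days, origin) =", etas_intensity(2.0, 0.0, 0.0, catalog))
```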
Seminar No. 96
Speaker: Lei, Xinglin (National Institute of Advanced Industrial Science and Technology (AIST), Senior Researcher)
Date/Time: 5 August 2024 (Mon), 15:00 - 16:00
Location: Institute of Statistical Mathematics, 4th-floor lounge, and online
Title: Can the extent of fluid involvement be quantified by statistics for fluid-involved or fluid-driven seismicity?
Abstract: The
complexity of seismic activity originates from the complexity of the fault
system itself and the interactions between faults during earthquakes. When
overpressured fluids are involved underground, the interaction between fluid
diffusion and faulting makes seismicity even more complex. Whether there is a
specific correlation between the extent of fluid involvement and the
statistical characteristics of seismicity is an interesting topic worth
exploring. This report presents systematic analysis results of several
seismic sequences with significantly different levels of fluid involvement
(including long-term low-pressure water injection and short-term multiple
high-pressure water injection-induced seismicity, as well as natural
earthquake swarms involving fluids of different extents). It summarizes some
basic understandings and highlights several future research directions.

Seminar No. 95
Speaker: Dr. Gerstenberger, Matt (GNS Science, New Zealand, Seismologist / NSHM Lead)
Date/Time: 10 June 2024 (Mon), 16:00 - 17:00
Location: Institute of Statistical Mathematics, 4th-floor lounge, and online
Title: The New Zealand National Seismic Hazard Model: modelling uncertainty and implications for its use
Abstract: In
2022 a significant revision of the New Zealand National Seismic Hazard Model (NSHM)
was released. The revision was a significant change to all components of the
model and resulted in a large increase in hazard across New Zealand. An
important focus of the revision was on the understanding and modelling of
uncertainties that impact the resultant hazard forecasts. Following a traditional probabilistic
seismic hazard analysis approach, all uncertainties are split into two
categories: 1)
aleatory, which is defined as inherent randomness in earthquake processes;
and 2) epistemic, which is defined as uncertainty in our knowledge. The
distinction between the two is not always clear cut, yet the choice in how
any parameter is modelled has clear implications for decisions based on the
NSHM. In NZ the NSHM has
significant uptake across government and private sectors. It underpins, for
example, the NZ building code,
large engineering projects, long-term planning and the insurance industry. In
this talk I will discuss how we model the uncertainty, what some of the key
uncertainties are (e.g., non-Poisson occurrence rates, and ground shaking),
and how these uncertainties are passed on and used, or not used, in decision
making. Importantly, I will discuss how the choice in how the uncertainty is
modelled can lead to the uncertainty being ignored or, potentially,
over-emphasised.

Seminar No. 94
Speaker: Dr. Koh, Jonathan (Oeschger Centre for Climate Change Research (OCCR), University of Bern, Switzerland, Post-doctoral researcher)
Date/Time: 16 April 2024 (Tue), 16:00 - 17:00
Location: Institute of Statistical Mathematics, 4th-floor lounge, and online
Title: Estimating shared random effects between model components in Bayesian hierarchical frameworks
Abstract: Estimating
shared random effects between model components can help reveal and interpret
common unseen drivers of different facets of the response. This can increase
parsimony and reduce estimation uncertainty, or tackle observation biases due
to heterogeneity in sampling effort that would otherwise lead to biased
statistical inferences. In this talk I will cover two main applications of
this methodology, on modelling wildfire activity (Koh et al., 2023) and the
first arrival date of migratory birds (Koh and Opitz, 2023).
References:
Koh, J. and T. Opitz (2023). Extreme-value modelling of migratory bird arrival dates: Insights from citizen science data. Preprint on arXiv:2312.01870.
Koh, J., F. Pimont, J.-L. Dupuy, and T. Opitz (2023). Spatiotemporal wildfire modelling through point processes with moderate and extreme marks.

Seminar No. 93
Speaker: Ueda, Taku (Disaster Prevention Research Institute, Kyoto University, Program-Specific Researcher (JSPS PD) / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Date/Time: 30 January 2024 (Tue), 16:00 - 17:00
Location: Online
Title: Spatial correlation of the shear strain energy change and the number of declustered crustal earthquakes in Japan
Abstract: The
spatial correlation of geodetically estimated strain rate and the background
seismicity has been discussed in several areas worldwide. In Japan, however,
there are azimuthal differences between the geodetically estimated maximum
contraction strain rate and the seismologically estimated maximum compression
stress. Thus, the increase in strain may not necessarily represent stress
accumulation. In this study, we compare the number of background events for
the crustal seismicity in Japan with the shear strain energy change which
considers the background stress field and the current crustal deformation. We
estimated the strain rate field using geodetically-estimated secular
velocities with a method based on basis function expansion. We
calculated the stress change using the estimated strain tensor assuming the
state of plane stress and evaluated the shear strain energy change
considering the background stress field estimated by Uchide et al. (2022) at 
intervals. We used  shallow ( km depth) crustal earthquakes and applied the
nearest neighbor distance approach to decluster the catalog during 2002—2010. The
shear strain energy change is weakly positively correlated with the number of
background earthquakes, and especially in SW Japan, this correlation is
significantly better compared with the correlation between the strain rate
and seismicity. In some areas, the seismicity rate is remarkably high
compared with that expected from linear regression with shear strain energy
change. These regions may be affected by crustal fluids that possibly weaken
the fault strength.
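The nearest-neighbor declustering mentioned above is commonly formulated with the Zaliapin-Ben-Zion space-time-magnitude distance; the sketch below computes that distance for a toy catalog under assumed values of the b-value, fractal dimension, and cluster threshold, which are illustrative rather than the ones used in the study.

```python
import numpy as np

B_VALUE = 1.0       # Gutenberg-Richter b-value (assumed)
FRACTAL_D = 1.6     # fractal dimension of epicenters (assumed)

def nearest_neighbor_distances(times, xs, ys, mags):
    """Zaliapin & Ben-Zion style nearest-neighbor distance for each event j:
    eta_j = min over earlier events i of  t_ij * r_ij^d_f * 10^(-b * m_i),
    where t_ij is the inter-event time and r_ij the epicentral distance."""
    n = len(times)
    eta, parent = np.full(n, np.inf), np.full(n, -1)
    for j in range(1, n):
        dt = times[j] - times[:j]
        r = np.hypot(xs[j] - xs[:j], ys[j] - ys[:j])
        dist = dt * np.maximum(r, 1e-3) ** FRACTAL_D * 10.0 ** (-B_VALUE * mags[:j])
        parent[j] = int(np.argmin(dist))
        eta[j] = dist[parent[j]]
    return eta, parent

# toy catalog: times in days, positions in km, Gutenberg-Richter magnitudes
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 365, 200))
xs, ys = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
mags = 2.0 + rng.exponential(1.0 / np.log(10), 200)

eta, parent = nearest_neighbor_distances(times, xs, ys, mags)
threshold = 1e-5            # assumed cutoff between clustered and background pairs
background = int(np.sum(eta[1:] > threshold)) + 1   # first event counted as background
print(f"{background} of {len(times)} events classified as background")
```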
Seminar No. 92
Statistical Seismology Seminar & Workshop on the research progress of the STAR-E and Japan-Italy joint projects
Date/Time: 7 November 2023 (Tue), 10:00 - 12:00 and 14:00 - 16:30
Location: Institute of Statistical Mathematics, 4th-floor lounge, and online

10:00 - 12:00  Presentations by graduate students
-1
Speaker: Li, Yongbo (Institute of Geophysics, China Earthquake Administration, graduate student / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Title: A Three-Dimensional Spherical Epidemic-Type Aftershock Sequence (3D-SETAS) Model and Its Application in the Alaska Region (Li, Y.*, Zhuang, J. and Chen, S.)
-2
Speaker: Wang, Zhifeng (China University of Geosciences (Wuhan), graduate student / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Title: Bayesian inversion for finite fault earthquake source models and uncertainty analysis (Wang, Z.*, Zhuang, J. and Wang, D.)
-3
Speaker: Si, Zhengya (Institute of Geophysics, China Earthquake Administration, graduate student / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Title: Bayesian merging of earthquake magnitudes determined by multiple seismic networks (Si, Z.*, Zhuang, J., Gentili, S., Jiang, C. and Wang, W.)
-4
Speaker: Piao, Jian (Peking University, China, graduate student / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Title: On the spatial response kernel in the ETAS model (Piao, J.*, Zhuang, J. and Zhou, S.)
-5
Speaker: Mao, Ning (Institute of Geophysics, China Earthquake Administration, graduate student / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Title: Extraction of secular variation signals and estimation of transfer function from geomagnetic stations (Mao, N.* and Chen, S.)

14:00 - 16:30
-6
Speaker: Peng, Hong (Institute of Statistical Mathematics, Project Researcher)
Title: Constructing an empirical envelope function of seismic waveforms for the evaluation of EEW in Japan
-7
Speaker: Petrillo, Giuseppe (Institute of Statistical Mathematics, Project Assistant Professor)
Title: Decoding the Puzzle of Earthquake Magnitude Dependency
-8
Speaker: Dr. Guo, Yicun (University of Chinese Academy of Sciences, Assistant Professor / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Title: Detection and Characterization of Earthquake Swarms in Nankai and Its Association With Slow Slip Events
-9
Speaker: Dr. Gentili, Stefania (National Institute of Oceanography and Applied Geophysics (OGS), Italy, Researcher)
Title: Aftershock forecasting by the NESTORE machine learning algorithm: applications to Italy, Slovenia, California, Greece and some preliminary results on Japan (Gentili, S.*, Chiappetta, G., Petrillo, G., Brondi, P., Zhuang, J. and Di Giovambattista, R.)
-10
Speaker: Niu, Yuanyuan (SOKENDAI (The Graduate University for Advanced Studies), graduate student)
Title: Nonparametric Bayesian inference for ETAS model (Niu, Y.* and Zhuang, J.)

Seminar No. 91
Speaker: Prof. Guan, Yongtao (Department of Management Science, School of Business Administration, University of Miami, USA, Professor)
Date/Time: 3 October 2023 (Tue), 16:00 - 17:00
Location: Online
Title: Group Network Hawkes Process
Abstract: In this
work, we study the event occurrences of individuals interacting in a network.
To characterize the dynamic interactions among the individuals, we propose a
group network Hawkes process (GNHP) model whose network structure is observed
and fixed. In particular, we introduce a latent group structure among
individuals to account for the heterogeneous user-specific characteristics. A
maximum likelihood approach is proposed to simultaneously cluster individuals
in the network and estimate model parameters. A fast EM algorithm is
subsequently developed by utilizing the branching representation of the
proposed GNHP model. Theoretical properties of the resulting estimators of
group memberships and model parameters are investigated under both settings
when the number of latent groups G is over-specified or correctly specified.
A data-driven criterion that can consistently identify the true G under mild
conditions is derived. Extensive simulation studies and an application to a
data set collected from Sina Weibo are used to illustrate the effectiveness
of the proposed methodology.
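The branching representation exploited by the EM algorithm assigns each event a probability of being a background event or the offspring of each earlier event. The sketch below computes these probabilities for a plain univariate temporal Hawkes process with an exponential kernel and invented parameter values, as a simplified stand-in for the network (GNHP) version discussed in the talk.

```python
import numpy as np

MU, A, BETA = 0.1, 0.5, 1.0     # background rate, branching ratio, decay (assumed)

def branching_probabilities(times):
    """E-step quantities for a temporal Hawkes process
    lambda(t) = mu + sum_{t_i < t} a * beta * exp(-beta * (t - t_i)):
    prob_background[j] = P(event j is background),
    prob_parent[j, i]  = P(event j was triggered by earlier event i)."""
    times = np.asarray(times, dtype=float)
    n = len(times)
    prob_parent = np.zeros((n, n))
    prob_background = np.zeros(n)
    for j in range(n):
        trig = np.zeros(n)
        trig[:j] = A * BETA * np.exp(-BETA * (times[j] - times[:j]))
        total = MU + trig.sum()
        prob_background[j] = MU / total
        prob_parent[j] = trig / total
    return prob_background, prob_parent

times = [0.2, 0.5, 0.6, 2.5, 2.6, 5.0]
pb, _ = branching_probabilities(times)
for t, p in zip(times, pb):
    print(f"event at t = {t}: P(background) = {p:.2f}")
```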
’nkЊQEƒŠƒXƒN•]‰¿ŠÇ—ƒZƒ“ƒ^[(E-DREaM) E•›Œ¤‹†ˆõj “ú@ŽžF@2023”N8ŒŽ29“ú(‰Î)
16F30-17F30 ƒIƒ“ƒ‰ƒCƒ“ TitleF@Seismic Hazard Assessment for Metropolises
and Sciences Parks in TaiwanF Analyzing
Ground Motions Using Deterministic and Probabilistic Methods AbstractF@ Seismic
hazard assessment (SHA) is essential for engineering constructions, hazard
mitigation strategies, and earthquake early warning. The foundation of SHA
is based on analyzing empirical ground motion datasets and constructing
ground motion models (GMMs) through parametrization and regression
techniques. However, significant challenges arise due to the absence of (1)
empirical ground motions from local large-magnitude earthquakes and (2) the
incorporation of physical effects, such as source-rupture processes and
complex wave propagation media, into a generalized functional form of GMM. To
conquer these challenges in estimating ground motion, physics-based ground
motion simulation is a powerful tool for deterministic seismic hazard
assessment. In recent years, we have conducted ground motion simulations,
taking great care to consider the characteristics of fault rupture, various
scales of subsurface structures, basin geometry, high-resolution topographic
relief, and more for earthquake scenarios in metropolitan areas and science
parks in Taiwan. These results have been integrated into loss estimation and
practical mitigation strategies. Furthermore, we have developed a short-term
seismic intensity forecasting procedure for earthquake warnings based on the
epidemic-type aftershock sequence (ETAS) and the ground motion model. Unlike
traditional probabilistic seismic hazard analysis (PSHA), which provides the
probability of ground motion exceeding certain levels over 30 or 50 years for
engineering concerns, our forecasting model can deliver warning messages
within a short period to facilitate timely decision-making. An example of the
Taitung earthquake sequence in September 2022 demonstrates the good
performance of forecasting seismic intensity. Keywords:
seismic hazard assessment, ground motion model, physics-based ground motion
simulation, epidemic-type aftershock sequence

Seminar No. 89
Speaker: Prof. Peng, Zhigang (School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, USA, Professor)
Date/Time: 16 June 2023 (Fri), 16:00 -
Location: Online
Title: What can 'missing' events tell us about earthquake statistics and the physical mechanisms of earthquake triggering?
Abstract: Large
earthquakes are generally followed by many smaller events known as
aftershocks. Sometimes they are preceded by foreshocks. While the general
statistical behaviors of earthquake sequences are well known, the physical
mechanisms for foreshock and aftershock generation are still in debate.
Several studies have shown that immediately before or after large
earthquakes, many smaller events are missing in the standard earthquake
catalog, which may hamper our understanding of the physical processes during
this important time period. Here I review recent advances in building more
complete earthquake catalogs using waveform-based techniques such as template
matching or machine learning. I will present several case studies to show how
these improved earthquake catalogs reveal new patterns in the space-time
evolutions of these earthquake sequences. I will end the talk with some open
questions/suggestions that can potentially further improve earthquake
detection immediately before or after large earthquakes.
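As an illustration of the waveform-based template matching mentioned in the abstract, the sketch below slides a template along a continuous trace and flags windows whose normalized cross-correlation exceeds a chosen threshold; the synthetic waveform, sampling rate, and detection threshold are assumptions for demonstration only.

```python
import numpy as np

def normalized_cross_correlation(trace, template):
    """Correlation coefficient of the template against every window of the
    trace; values lie in [-1, 1]."""
    nt = len(template)
    tpl = (template - template.mean()) / (template.std() * nt)
    cc = np.empty(len(trace) - nt + 1)
    for i in range(len(cc)):
        win = trace[i:i + nt]
        std = win.std()
        cc[i] = 0.0 if std == 0 else np.sum(tpl * (win - win.mean())) / std
    return cc

rng = np.random.default_rng(0)
fs = 100.0                                               # assumed sampling rate (Hz)
tt = np.arange(0, 2.0, 1.0 / fs)
template = np.sin(2 * np.pi * 5 * tt) * np.exp(-3 * tt)  # synthetic "event" waveform

trace = rng.normal(0, 0.5, 60 * int(fs))                 # one minute of noise
for onset in (10.0, 42.3):                               # hide two scaled template copies
    i0 = int(onset * fs)
    trace[i0:i0 + len(template)] += 0.8 * template

cc = normalized_cross_correlation(trace, template)
threshold = 0.6                                          # assumed detection threshold
detections = np.where(cc > threshold)[0] / fs
print("candidate detection times (s):", np.round(detections, 2))
```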
Seminar No. 88
-1
Speaker: Dr. Hainzl, Sebastian (GFZ German Research Centre for Geosciences, Germany, Senior Researcher)
Date/Time: 17 March 2023 (Fri), 15:00 - 16:00
Location: Institute of Statistical Mathematics, Seminar Room 5 (D313/314), and online
Title: Stress-based seismicity modeling
Abstract: While
the ETAS model successfully describes the first-order characteristics of
short-term earthquake clustering due to earthquake interactions, it can only
model earthquake activation, not unloading effects. To be more realistic,
seismicity models based on estimated stress can be applied. Two widely-used
physics-based seismicity models are the Coulomb failure model and Dieterich's
rate-state model, which assume pre-existing populations of faults that
respond to changes in Coulomb stress. I will discuss a modified
Coulomb-Failure model in which instantaneous triggering is replaced by a mean
time-to-failure that depends exponentially on the absolute stress value. For
critical initial stresses, we show that the model leads to identical
forecasts as the Rate-State model and reproduces the Omori-Utsu relation for
aftershock decays and stress-shadowing effects. Thus, both previous
stress-based seismicity models can be seen as special cases of the new model.
However, the new stress response model can also account for subcritical
initial stress conditions, which is particularly relevant for induced
seismicity in intraplate regions. Furthermore, I will present the results of
a recent systematic analysis of the dependence of seismicity parameters
(b-value and Omori-parameters) on the calculated stress changes and discuss
the possible use of these relationships in seismicity modeling.
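For context on one of the two reference models named above, here is a minimal sketch of the classical Dieterich (1994) seismicity-rate response to an instantaneous Coulomb stress step, evaluated with illustrative parameter values; the modified Coulomb-failure model with a stress-dependent mean time-to-failure presented in the talk is not reproduced here.

```python
import numpy as np

def dieterich_rate(t, delta_tau, r_background, a_sigma, t_a):
    """Seismicity rate after an instantaneous Coulomb stress step delta_tau
    (Dieterich, 1994):
    R(t)/r = 1 / (1 + (exp(-delta_tau / (A*sigma)) - 1) * exp(-t / t_a)),
    where a_sigma = A*sigma is the constitutive parameter times normal stress
    and t_a is the aftershock relaxation time."""
    gamma = (np.exp(-delta_tau / a_sigma) - 1.0) * np.exp(-t / t_a)
    return r_background / (1.0 + gamma)

# Illustrative values: 0.5 MPa stress increase, A*sigma = 0.05 MPa, t_a = 1000 days
t_days = np.array([0.1, 1.0, 10.0, 100.0, 1000.0, 5000.0])
rate = dieterich_rate(t_days, delta_tau=0.5, r_background=1.0,
                      a_sigma=0.05, t_a=1000.0)
for ti, ri in zip(t_days, rate):
    print(f"t = {ti:7.1f} d : R/r = {ri:10.1f}")
```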
-2
Speaker: Stockman, Sam (Computational Statistics and Data Science, University of Bristol, UK, graduate student)
Date/Time: 17 March 2023 (Fri), 16:00 - 17:00
Location: Institute of Statistical Mathematics, Seminar Room 5 (D313/314), and online
Title: Forecasting the 2016-2017 Central Apennines Earthquake Sequence with a Neural Point Process
Abstract: Point
processes have been dominant in modeling the evolution of seismicity for
decades, with the Epidemic Type Aftershock Sequence (ETAS) model being most
popular. Recent advances in machine learning have constructed highly flexible
point process models using neural networks to improve upon existing
parametric models. We investigate whether these flexible point process models
can be applied to short-term seismicity forecasting by extending an existing
temporal neural model to the magnitude domain and we show how this model can
forecast earthquakes above a target magnitude threshold. We first demonstrate
that the neural model can fit synthetic ETAS data while requiring less
computational time because it is not dependent on the full history of the
sequence. By artificially emulating short-term aftershock incompleteness in
the synthetic dataset, we find that the neural model outperforms ETAS. Using
a new enhanced catalog from the 2016-2017 Central Apennines earthquake
sequence, we investigate the predictive skill of ETAS and the neural model
with respect to the lowest input magnitude. Constructing multiple forecasting
experiments using the Visso, Norcia and Campotosto earthquakes to partition
training and testing data, we target M3+ events. We find both models perform
similarly at previously explored thresholds (e.g., above M3), but lowering
the threshold to M1.2 reduces the performance of ETAS unlike the neural
model. We argue that some of these gains are due to the neural model's
ability to handle incomplete data. The robustness to missing data and speed
to train the neural model present it as an encouraging competitor in
earthquake forecasting.

Seminar No. 87
Speaker: Sato, Daisuke (Disaster Prevention Research Institute, Kyoto University, JSPS Research Fellow)
Date/Time: 26 December 2022 (Mon), 16:00 - 17:00
Location: Online
Title: Inventing appropriate solving methods for hierarchical Bayesian inversions when using regularization priors
Abstract: Regularization
(e.g., damping, smoothing, and sparsity) is a robust technique for resolving the
ill-conditioning that arises when inverting for more model parameters than data. However, it leaves
questions in tuning the hyperparameters that weight data misfits and
regularization losses, namely, the likelihood and prior. Therefore,
hierarchical Bayes naturally follows to evaluate the model parameters and
hyperparameters probabilistically. In this talk, I introduce our study on an
appropriate solving method for hierarchical Bayesian problems. In linear
inverse problems, surprisingly, we find a significant part of standard
approaches fail to invert many model parameters, including orthodox Monte
Carlo methods and the posterior maximum, despite the use of regularization (Sato,
Fukahata, and Nozue, 2022). Their problematic behaviors are all analytically
explained by the entropic effect of the model-parameter space, which is accounted
for in empirical Bayes, as suggested in an early work of Robbins (1956). The
same issue remains in nonlinear problems, and I will also talk about
numerical methods under development to fix this "bug" of ordinary
Monte Carlo methods.
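To make the hyperparameter question concrete, the sketch below solves a toy regularized linear inversion and selects the smoothness weight by minimizing the (Gaussian) negative log marginal likelihood, the quantity behind Akaike's Bayesian Information Criterion and the empirical-Bayes treatment mentioned above; the forward operator, noise level, and candidate grid are invented, and this is not the solution method developed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem d = G m + noise with a smooth true model
n_model, n_data, sigma = 50, 40, 0.05
x = np.linspace(0.0, 1.0, n_model)
m_true = np.sin(2 * np.pi * x) * np.exp(-2 * x)
G = rng.normal(size=(n_data, n_model)) / np.sqrt(n_model)
d = G @ m_true + rng.normal(0.0, sigma, n_data)

L = np.diff(np.eye(n_model), n=2, axis=0)   # second-difference (smoothness) operator

def neg_log_marginal_likelihood(alpha2):
    """Up to constants, -2 log p(d | alpha^2) for the hierarchical model
    d | m ~ N(G m, sigma^2 I) with smoothness prior L m ~ N(0, (sigma^2/alpha^2) I).
    Minimizing this over alpha^2 is the ABIC / empirical-Bayes strategy."""
    A = G.T @ G + alpha2 * (L.T @ L)
    m_map = np.linalg.solve(A, G.T @ d)
    penalized_misfit = np.sum((d - G @ m_map) ** 2) + alpha2 * np.sum((L @ m_map) ** 2)
    _, logdet_A = np.linalg.slogdet(A)
    return penalized_misfit / sigma**2 + logdet_A - L.shape[0] * np.log(alpha2)

alphas = np.logspace(-2, 4, 60)
scores = [neg_log_marginal_likelihood(a) for a in alphas]
best = alphas[int(np.argmin(scores))]
print(f"selected smoothness weight alpha^2 = {best:.2f}")
```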
Seminar No. 86
-1
Speaker: Dr. Zhuang, Jiancang (Institute of Statistical Mathematics, Department of Statistical Modeling, Associate Professor)
Date/Time: 14 November 2022 (Mon), 16:00 - 16:45
Location: Online
Title: Evaluating earthquake forecasts with likelihood-based marginal and conditional scores
Abstract: Earthquake
forecasting consists of the forecasts of the numbers, occurrence times,
locations, and magnitudes of seismic events, and even the correlations among
them. In order to evaluate each component in the forecasting, CSEP provides
a variety of testing criteria, but these are still far from sufficient. In this study,
starting from the essential meanings of the full likelihood of a point
process, I will explain how to evaluate the marginal likelihoods for each
component in the forecasts and the conditional likelihoods for some components
under the condition that some other components are given. These marginal and
conditional likelihoods are the bases for
the marginal and conditional scores in performance evaluation when multiple
models are used in forecasting.
-2
Speaker: Rahmani, Sofiane (Center of Research in Astronomy, Astrophysics and Geophysics (CRAAG), Algeria, graduate student)
Date/Time: 14 November 2022 (Mon), 16:45 - 17:30
Location: Online
Title: Time-dependent and spatiotemporal statistical analysis of Algerian seismicity
Abstract: Northeastern
Algeria is known for its high seismic activity, as reflected by several
hundreds of events occurring every year. Recently, this area has been the
seat of several seismic sequences such as the 2010 Beni-Ilmane earthquake
sequence and the 2012–2013 Bejaia earthquake sequences. On the other hand, it
is also observed that the seismic activity of this part of Algeria is
dominated by swarms, with high concentrations in time and space, from a few
days to several months, ranging from a few kilometers to ten kilometers, and
sometimes showing a migration of several kilometers in several weeks. Our
main objective is to understand the global behavior of this seismic zone; to do so, it
is of crucial importance to understand what the driving forces are and how
they interact. These rupture processes are still under debate, but can be
deduced from the analysis of seismic swarms or aftershock sequences, as both
share similarities. Using a model corresponding to accurate detection and
relocation, we will study seismic clusters in detail by analyzing them from the
inside, identifying the different families that compose them. In doing so, we
will isolate the aftershocks, induced by the coseismic stress transfer, from
the swarm-like events, and we will analyze separately their spatio-temporal
behavior by a fine statistical analysis. Stochastic models, which
incorporate an increasing amount of physical reasoning, have slowly been accepted
during the last three decades. The subject of statistical seismology aims at
bridging the gap between physics-based models without statistics, and
statistics-based models without physics. Our objectives are also to
understand what mechanisms are at work in the case of sequences and in the
case of swarms, to see to what extent it is possible to link the observations
and analyses that will be carried out to the regional seismotectonic context,
and to understand what influence this may have for the calculation of seismic
hazard.

Seminar No. 85
Speaker: Nishikawa, Tomoaki (Disaster Prevention Research Institute, Kyoto University, Assistant Professor / Earthquake Prediction Analysis Project, Risk Analysis Research Center, Institute of Statistical Mathematics, Visiting Researcher)
Date/Time: 3 October 2022 (Mon), 16:15 - 18:15
Location: Online
Title: Application of the ETAS model to slow earthquake research
Abstract: The epidemic-type
aftershock-sequence (ETAS) model is a standard statistical model of fast
earthquake activity. We are attempting to apply the ETAS model to slow
earthquake research. The attempts are focused on the following three topics: (1)
detection of slow earthquakes, (2) improvement of the ETAS model by
considering slow earthquake activity, and (3) construction of a statistical
model to describe slow earthquake activity. In this presentation, we will
present the results of (1) regarding the elucidation of the slow earthquake
distribution in the Japan Trench and preliminary results regarding (2) and
(3).

Seminar No. 84
Speaker: Peng, Hong (Research Center for Earthquake Prediction, Disaster Prevention Research Institute, Kyoto University, graduate student)
Date/Time: 7 July 2022 (Thu), 16:00 - 18:00
Location: Online
Title: Characteristics of foreshocks preceding the mainshocks in Japan
Abstract: Studying
the foreshocks is an important way to understand the physical mechanism of
earthquakes. In this research, I will use the Japan Meteorological Agency
(JMA) earthquake catalogue from 2001 to 2021 to investigate the
spatiotemporal characteristics of foreshocks for mainshocks. I have three
important conclusions about the earthquakes in Japan: 1)
No dependence of the mainshock magnitude on the foreshock magnitude; 2) No
obvious trend between the foreshock-mainshock distance and the mainshock
magnitude; 3) A decrease in the foreshock-mainshock time difference with the
increased foreshock magnitude. These results of the foreshock-mainshock
sequence seem to be more consistent with the triggering mechanism rather than
the nucleation-controlled process. Thus, I suggest that the 'cascade model'
or rupture-controlled model is more reasonable for explaining the physical
mechanism of earthquakes in Japan.

Seminar No. 83
Speaker: Petrillo, Giuseppe (Institute of Statistical Mathematics, Visiting Researcher)
Date/Time: 26 April 2022 (Tue), 16:00 - 18:00
Location: Online
Title: Statistical mechanics models for seismic occurrence
Abstract: Earthquake
occurrence is characterized by quite universal scaling relationships, in
particular regarding the events after a big shock: the
aftershocks. Here I discuss a fault model of two elastic interfaces with
different rheology. This model contains a mechanism for the aftershock
occurrence and I will show that our data recover the experimental scaling
relationships at an excellent qualitative level. We also find that large
earthquakes are often anticipated by a preparatory phase characterized by the
occurrence of foreshocks. Finally, I will mention that thanks to these
models, it is possible to carry out extensive simulations to verify the
predictability of seismic phenomena.

Seminar No. 82
Speaker: Yano, Keisuke (Institute of Statistical Mathematics, Department of Statistical Inference and Mathematics, Associate Professor)
Date/Time: 1 February 2022 (Tue), 16:30 - 18:00
Location: Online
Title: l1 trend filtering based detection of slow slip events
Abstract: Slow
slip events characterized by a slower fault rupture compared to regular
earthquakes have been discovered in tectonic zones worldwide and have helped
us understand the surrounding stress environment including megathrust zone.
In this talk, I will present a new detection method of slow slip events using
l1 trend filtering, a sparse estimation technique together with combined
p-value techniques. The proposed method provides not only candidates of the
events but also confidence values for detections. The synthetic tests showed
that our method successfully detects almost all events with few misdetections.
The application to real data in the Nankai subduction zone in western
Shikoku, southwest Japan, revealed our method detected new potential events
in addition to all known events.
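For readers unfamiliar with the technique, the sketch below applies generic l1 trend filtering (a sparse penalty on second differences) to a synthetic displacement-like series using the cvxpy modeling library; the synthetic signal, penalty weight, and kink threshold are assumptions, and the combined p-value machinery of the talk is not included.

```python
import numpy as np
import cvxpy as cp     # external convex-optimization library

rng = np.random.default_rng(0)

# Synthetic "GNSS displacement": linear trend plus a transient slope change
# imitating a slow slip event, plus observation noise
n = 300
t = np.arange(n)
signal = 0.01 * t + np.where((t > 120) & (t < 170), 0.05 * (t - 120), 0.0)
signal[t >= 170] += 0.05 * 50
y = signal + rng.normal(0.0, 0.3, n)

# l1 trend filtering: minimize 0.5*||y - x||_2^2 + lam*||D2 x||_1, where D2 is
# the second-difference operator; the l1 term yields a piecewise-linear trend
# whose kinks mark candidate events.
D2 = np.diff(np.eye(n), n=2, axis=0)
lam = 10.0                                   # assumed penalty weight
x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D2 @ x))
cp.Problem(objective).solve()

kinks = np.where(np.abs(D2 @ x.value) > 1e-3)[0]
print("candidate slope changes near sample indices:", kinks)
```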
Seminar No. 81
Speaker: Takeo, Akiko (Earthquake Research Institute, The University of Tokyo, Assistant Professor)
Date/Time: 16 November 2021 (Tue), 16:00 - 18:00
Location: Online
Title: Observation, detection, evaluation and interpretation of slow earthquakes: is it episodic or chaotic?
Abstract: I
introduce observation of slow earthquakes by broadband seismometers in the
Nankai subduction zone. This includes (i) how to detect deep very low
frequency earthquakes (VLFEs) with moment magnitudes of ~3 at the period
range of 20-50 s related to tectonic tremors at a frequency range of 2-8 Hz,
(ii) how to evaluate the reliabilities of individual and total VLFE
detections, and (iii) how to interpret the VLFE activities. I currently focus
on the temporal change in the activity mode from episodic to potential
chaotic during a high-stress loading period, which might be mathematically
and physically interesting.

Seminar No. 80
Speaker: Shen, Xun (SOKENDAI (The Graduate University for Advanced Studies), graduate student)
Date/Time: 19 October 2021 (Tue), 16:00 - 18:00
Location: Online
Title: Residual Analysis for State Space Models
Abstract: This
presentation introduces a novel residual analysis-based algorithm for model
learning and hidden state inference in State-Space Models (SSMs) with a
nonlinear response. An SSM with nonlinear response has linear state equation
while the observation equation includes a linear part and a nonlinear part,
where the information of the nonlinear part is not available. In this study,
a neural network model is used to approximate the unknown nonlinear part in
the observation equation, and an Expectation-Maximization (EM) algorithm is
proposed to infer the hidden state and learn the parameters in both the
linear part and the neural network model, from the given sequences of input
data and observation data. In the E-Step, the posterior mean and covariance
for the system hidden state given the sequences of the system input and
observations are inferred via a Kalman filter-based forward recursion and a
Rauch-Tung-Striebel smoother backward recursion. In the M-Step, the model
parameters are optimized according to the inferred hidden state, input data,
and observation data. The M-Step consists of two components: a
reconstruction procedure, which uses the residuals of the linear model to
fit the neural network model, and a parametrization procedure, which
identifies the parameters in the linear part of the state space model. We
apply this newly proposed method to a numerical example and in a case study
of battery capacity estimation. The results show that the proposed method can
achieve better performance on the model learning and hidden state inference
than previously developed tools.
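To make the E-step concrete, here is a minimal sketch of the Kalman-filter forward recursion and Rauch-Tung-Striebel backward smoother for a purely linear Gaussian state-space model with invented system matrices; the neural-network observation term described in the talk is omitted.

```python
import numpy as np

def kalman_filter_smoother(y, A, C, Q, R, x0, P0):
    """Kalman forward filter plus Rauch-Tung-Striebel backward smoother for
    x_t = A x_{t-1} + w_t, w ~ N(0, Q);  y_t = C x_t + v_t, v ~ N(0, R)."""
    n, dx = len(y), A.shape[0]
    xf = np.zeros((n, dx)); Pf = np.zeros((n, dx, dx))
    xp = np.zeros((n, dx)); Pp = np.zeros((n, dx, dx))
    x, P = x0, P0
    for t in range(n):                             # forward (filter) pass
        xp[t], Pp[t] = A @ x, A @ P @ A.T + Q
        S = C @ Pp[t] @ C.T + R
        K = Pp[t] @ C.T @ np.linalg.inv(S)         # Kalman gain
        x = xp[t] + K @ (y[t] - C @ xp[t])
        P = (np.eye(dx) - K @ C) @ Pp[t]
        xf[t], Pf[t] = x, P
    xs, Ps = xf.copy(), Pf.copy()
    for t in range(n - 2, -1, -1):                 # backward (RTS smoother) pass
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    return xs, Ps

# Toy one-dimensional random-walk state observed with noise (values assumed)
rng = np.random.default_rng(0)
A, C = np.array([[1.0]]), np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.5]])
x_true = np.cumsum(rng.normal(0.0, 0.1, 100))
y = x_true[:, None] + rng.normal(0.0, np.sqrt(0.5), (100, 1))
xs, _ = kalman_filter_smoother(y, A, C, Q, R, x0=np.zeros(1), P0=np.eye(1))
print("smoothed state at t=50:", xs[50], " true:", x_true[50])
```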
Seminar No. 79
Speaker: Dr. Wang, Ting (Department of Mathematics and Statistics, University of Otago, New Zealand, Associate Professor)
Date/Time: 21 September 2021 (Tue), 13:00 - 14:00
Location: Online
Title: A time series model for forecasting earthquake energy release
Abstract: Large
earthquakes often occur repeatedly. Modelling the history of earthquakes can
help forecast future hazardous events. In this talk, I will present a time
series model that we have developed to study recurrence patterns of
earthquakes. This model captures both the intensity of earthquake occurrence
and patterns of earthquake energy release in time. I will discuss the
forecasts of earthquake energy release using this model.

Seminar No. 78
Speaker: Dr. Xiong, Ziyao (Institute of Statistical Mathematics, Project Assistant Professor)
Date/Time: 27 July 2021 (Tue), 16:00 - 18:00
Location: Online
Title: The Research of Long-Term Earthquake Hazard Estimated from a Modern Catalog
Abstract: In
this study, to obtain optimal estimates of the earthquake hazard in North
China based on the modern earthquake catalog, we used two variable kernel
function estimation methods, proposed by Stock and Smith, and Zhuang, the
Bayesian Delaunay tessellation smoothing method by Ogata (ODTB), and a newly
proposed incomplete centroidal Voronoi tessellation (ICVT) method, to
calculate the total and background seismic spatial occurrence rates for the
study area. The sophisticated ODTB method is more stable than the others, but
is relatively expensive, in terms of computation demands, whereas Zhuang et
al.'s kernel estimate and the new ICVT method are able to provide reasonable
estimates and are easier to implement. We also calculated the spatial variations
of the b-value, using the Bayesian method with smoothness prior proposed by
Ogata. Using comparative analyses and simulation experiments, we show that
all the methods give similar spatial patterns of seismic occurrences.

Seminar No. 77
Speaker: Yamada, Masumi (Disaster Prevention Research Institute, Kyoto University, Assistant Professor)
Date/Time: 24 June 2021 (Thu), 16:00 - 18:00
Location: Online
Title: IPFx: a new source determination algorithm for earthquake early warning
Abstract: An
earthquake early warning (EEW) system rapidly analyzes seismic data to report
the occurrence of an earthquake before strong shaking is felt at a site. In
Japan, the integrated particle filter (IPF) method, a new source estimation
algorithm, was recently incorporated into the EEW system to improve the
source estimation accuracy during active seismicity. The problem of the
current IPF method is that it uses the trigger information computed at each
station in a specific format as the input and is therefore applicable to only
limited seismic networks. This study proposes the extended IPF (IPFx) method
to deal with continuous waveforms and merge all Japanese real-time seismic
networks into a single framework. The new source determination algorithm
processes seismic waveforms in two stages. The first stage (single-station
processing) extracts trigger and amplitude information from continuous
waveforms. The second stage (network processing) accumulates information from
multiple stations and estimates the location and magnitude of ongoing
earthquakes based on Bayesian inference. In 10 months of continuous online
experiments, the IPFx method showed good performance in detecting earthquakes
with maximum seismic intensity >=3 in the Japan Meteorological Agency
(JMA) catalog. By merging multiple seismic networks into a single EEW system,
the warning time of the current EEW system can be improved further. The IPFx
method provides accurate shaking estimation even at the beginning of event
detection and achieves a seismic intensity error of less than 0.2 within 5 s of detecting an
event. This method correctly avoided two major false alarms on January 5,
2018, and July 30, 2020. The IPFx method offers the potential of expanding
the JMA IPF method to global seismic networks.

Seminar No. 76
Speaker: Dr. Chang, Ying (Department of Earth and Space Sciences, Southern University of Science and Technology, Research Fellow)
Date/Time: 14 January 2020 (Tue), 13:30 - 14:30
Location: Institute of Statistical Mathematics, Seminar Room 5 (D313/314)
Title: Differences between mantle wedge earthquakes and intraslab intermediate-depth earthquakes from spatial b-value image
Abstract: Intermediate-depth
earthquakes follow the power law distribution, the Gutenberg-Richter law
log N = a - bM. The b-value shows the proportion of small-magnitude earthquakes
relative to the large ones. A large proportion of small earthquakes usually
appears in high thermal anomaly region and low ambient stress status field.
High b-value anomalies in subduction zones have been associated with
dehydration of subducting oceanic crust, which is a plausible mechanism of
intermediate-depth earthquakes, or low velocities indicating magmatic
activities. The analysis of b-value may be a useful tool to investigate the
mechanisms of earthquakes and an indicator of structural difference in
subduction zones, and especially beneficial to subduction zones which have
intensive small magnitude earthquakes. In southwestern Colombian subduction
zone, a high rate of intermediate-depth earthquakes appears in the Cauca
cluster from the earthquake catalog of Servicio Geológico Colombiano.
Previous studies of the intermediate-depth earthquakes in the cluster show a
continuous 20-km-thick seismic zone dipping to the southeast with a 33°-43° dip
angle increasing to the south, and two mantle wedge earthquake columns
extending 30–40 km normal to and above the top surface of the subducting
slab. The focal mechanisms of earthquakes in the cluster have various types
and variable orientations of nodal planes. The intermediate-depth earthquakes
in southwestern Colombian subduction zone occur in a spatial variant tectonic
stress field and have complex mechanisms. Intraslab earthquakes generally
have smaller b-values than mantle wedge earthquakes. In the slab, high b-value
anomalies appear in the top layer of the subducting slab and the mantle
wedge. Together, the focal mechanisms, the stress fields, and the b-value
anomalies indicate structures involving dehydration-released fluids and magmatic
activities.

Seminar No. 75
-1
Speaker: Dr. Li, Honglei (Institute of Geophysics, China Earthquake Administration, Assistant Professor)
Date/Time: 28 August 2019 (Wed), 13:00 - 13:45
Location: Institute of Statistical Mathematics, Seminar Room 5 (D313/314)
Title: Bayesian assimilation inversion of gravity anomalies and parameter optimization
Abstract: It is
well known that gravity inversion is a classical ill-posed problem, so
regularization parameters must be introduced for the inversion. In this study,
we design a Bayesian assimilation inversion strategy that addresses the
subjectivity problems in gravity anomaly inversion; the strategy optimally
balances multi-source gravimetric data, various model constraints, and
multiple hyperparameters related to the accuracies of the observation
data. We employed Akaike's Bayesian Information Criterion (ABIC) to
estimate these trade-off parameters. Based on this novel strategy, we
designed test cases for the gravity inversion using gravity datasets from
different measurements with varying accuracy levels. This inversion strategy
can integrate different types of gravimetric observations, objectively
weight the observations and prior model constraints, and has promising
prospects for future applications.
-2
Speaker: Prof. Chen, Shi (Institute of Geophysics, China Earthquake Administration, Professor)
Date/Time: 28 August 2019 (Wed), 13:45 - 15:00
Location: Institute of Statistical Mathematics, Seminar Room 5 (D313/314)
Title: A Bayesian approach to network adjustment for campaigned gravity survey: methodology and model test
Abstract: The
drift rate of the relative gravimeter differs from time to time and from
meter to meter, and it is inefficient to estimate the drift rate by returning
to the base station or stations with known gravity value frequently in a
campaigned gravity survey for the large-scale region. Unlike the conventional
gravity adjustment procedure which employed a linear drift model, we assumed
the variation of drift rate is a smooth function of the time-lapse, and
proposed a new gravity data adjustment method by means of objective Bayesian
statistical inference. Some hyper-parameters were used as trade-offs to
balance the fitted residuals of gravity differences between station pairs and
the smoothness of the temporal variation of the drift rate. We employed the
Akaike's Bayesian Information Criterion (ABIC) to estimate these
hyper-parameters. A comparison between results from applying the classical
and the Bayesian adjustment methods to some simulated datasets shows that the
new method is more robust and adaptive for solving the problems that are
caused by the irregular non-linear meter drift. The new adjustment method is
capable of recovering the time-varying drift rate function of each gravimeter,
and also to optimize the weight constraints for each gravimeter that is used
in the gravity survey. We also carried out an error analysis for the inverted
gravity value at each station based on the marginal distribution. Finally, we
used this approach to process the real campaigned gravity data from an
observation network in North China. In this study, we rewrite the network
adjustment equations by introducing new trade-off parameters that balance the
residual of campaigned gravity data and the drift rate of the relative
gravimeter. This new method is tested with some synthetic datasets that have
been simulated with different drift models based on a real gravity
observation network. A comprehensive analysis on the fitting residuals and
the accuracy of adjustment is carried out.
-3
Speaker: Dr. Bayona, Jose Antonio (GFZ German Research Center for Geosciences, Research Fellow)
Date/Time: 28 August 2019 (Wed), 15:30 - 16:15
Location: Institute of Statistical Mathematics, Seminar Room 5 (D313/314)
Title: An updated global hybrid earthquake model obtained from the optimal combination of interseismic strain rates and smoothed-seismicity data
Abstract: The
construction of global seismicity forecasts gives promise of definitive prospective
test results to be obtained in only a decade. Hence, there have been several
efforts
to generate global earthquake-rate models based on interseismic strain rates
and earthquake-catalog data, which currently provide high-resolution global
coverage. The Global Earthquake Activity Rate (GEAR1) seismicity model, for
instance, optimally combines crustal deformation rates with
smoothed-seismicity information to forecast long-term rates of earthquake
production worldwide. The
total earthquake number, spatial, and magnitude distributions forecasted by
GEAR1 are all consistent with observed seismicity, according to 2-yr
prospective test results. Nonetheless, inconsistencies in spatial seismicity
between the Seismic Hazard Inferred From Tectonics (SHIFT_GSRM2f) earthquake
forecast, the tectonic parent component of GEAR1, and the observations are
also found during the evaluation period. These discrepancies primarily stem
from SHIFT_GSRM2f underestimations of subduction-zone earthquake activity. The
Subduction Megathrust Earthquake Rate Forecast (SMERF) earthquake model was
accordingly designed to improve SHIFT_GSRM2f estimates of shallow interface
seismicity. SMERF is based on the use of regional seismicity parameters and
the conservation of moment principle. Therefore, the physics-based and
data-driven approach of SMERF is desired to upgrade the tectonic parent
component of GEAR1. In
this study, we integrate SMERF earthquake rates in subduction zones with
SHIFT_GSRM2f estimates everywhere else on Earth to generate a new global
geodetic-based earthquake model, referred to as the Tectonic Earthquake
Activity Model (TEAM) seismicity forecast. We detect significant spatial
variations of earthquake activity between SHIFT_GSRM2f and TEAM in all
subduction zones. Particularly, we identify the major differences
within subduction interfaces like Bougainville, Southern Kuril and Western
Alaska. We
moreover combine TEAM with the Kagan–Jackson smoothed seismicity (KJSS)
model, the earthquake parent component of GEAR1, to create an updated hybrid
seismicity model named GEAR2. We currently explore the optimal combination of
geodetic strain rates and earthquake-catalog data needed to better
characterize spatial earthquake patterns worldwide. Finally, we will submit
the earthquake-rate forecasts to the Collaboratory for the Study of
Earthquake Predictability (CSEP) testing center for independent retrospective,
pseudo-prospective and prospective evaluation.

Seminar No. 74
Speaker: Dr. Chen, Feng (School of Mathematics and Statistics, University of New South Wales, Australia, Senior Lecturer (equivalent to Associate Professor))
Date/Time: 21 May 2019 (Tue), 15:00 - 16:00
Location: Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Direct Likelihood Evaluation for the Renewal Hawkes Process
Abstract: An
interesting extension of the widely applied Hawkes self-exciting point
process, the renewal Hawkes (RHawkes) process, was recently proposed by
Wheatley et al. (2016 CSDA), which has the potential to significantly widen
the application domains of the self-exciting point processes. However, the
authors claimed that computation of the likelihood of the RHawkes process
requires exponential time and therefore is practically impossible. They
proposed two Expectation-Maximization (EM) type algorithms to compute the
maximum likelihood estimator (MLE) of the model parameters. Because of the
fundamental role of likelihood in statistical inference, a practically
feasible method for likelihood evaluation is highly desirable. In this talk
we present an algorithm that evaluates the likelihood of the RHawkes process
in quadratic time, a drastic improvement from the exponential time claimed by
Wheatley et al. We demonstrate the superior performance of the resulting MLEs
of the model relative to the EM estimators through simulations. We also
present a computationally efficient procedure to calculate the Rosenblatt
residuals of the process for goodness-of-fit assessment, and a simple yet
efficient procedure for future event prediction. The proposed methodologies
were applied on real data from seismology and finance. This talk is based on
joint work with Tom Stindl. The R package implementing the proposed
methodology is available on CRAN: https://cran.r-project.org/web/packages/RHawkes/.

Seminar No. 73
Speaker: Prof. Wang, Baoshan (Earthquake and Volcano Research Center, Graduate School of Environmental Studies, Nagoya University, Visiting Researcher (Professor, University of Science and Technology of China))
Date/Time: 23 April 2019 (Tue), 15:00 - 16:00
Location: Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Migration of micro-earthquakes during cyclic operation of Underground Gas Storage and the Changdao earthquake swarm
Abstract: The
distribution of earthquakes is controlled by the stress state and material
properties, and the migration of earthquakes can be used to infer the changes
in subsurface stress or medium properties. In this presentation, we will
introduce two case studies of seismicity migrations related respectively to
the Hutubi Underground Gas Storage (UGS) in Junggar Basin and the Changdao
earthquake swarm that occurred in the Bohai Sea. We first use the matched-filter
technique to detect more events than in the local catalog, and then we
relocate the detected events using the double-difference method with waveform
cross-correlation-based differential travel-times. Micro-earthquakes clearly
migrate outward from the UGS during the cyclic operation; the migration may
result from the stress transfer during multiple injections and extractions. The
seismicity during the Changdao earthquake swarm migrated unilaterally to the south-west,
accompanied by some bursts along several conjugate faults. We suggest that
fluid diffusion is responsible for the earthquake migration in Changdao.

Seminar No. 72
-1
Speaker: Prof. Chen, Xiaofei (Department of Earth and Space Sciences, Southern University of Science and Technology, China, Professor)
Date/Time: 22 January 2019 (Tue), 13:30 - 14:30
Location: Institute of Statistical Mathematics, Seminar Room 7 (A504)
Title: Phase diagram of earthquakes and implications
-2
Speaker: Nanjo, Kazuyoshi (Global Center for Asian and Regional Research, University of Shizuoka, Project Associate Professor)
Date/Time: 22 January 2019 (Tue), 14:30 - 15:30
Location: Institute of Statistical Mathematics, Seminar Room 7 (A504)
Title: An investigation into the relation between the occurrence of large earthquakes and time-dependent decrease in b value
Abstract: The
Gutenberg-Richter frequency-magnitude distribution of earthquakes is well
established in seismology. The b value, the slope of the relation between
frequency and magnitude, is typically 1, but it often shows variations around
1. The b value has shown a pronounced decrease over several years prior to
large earthquakes around their hypocenters. Specific examples include the
M9-class 2011 Tohoku and 2004 Sumatra earthquakes (e.g., Nanjo et al., 2012).
However, it has remained uncertain whether large earthquakes tend to occur
following the appearance of a b-value decrease.
To test this, we are now trying to create a method to make and
evaluate trial retrospective forecasts of large earthquakes (e.g., M8+
earthquakes from 1980 to 2017 on the worldwide basis, using the ANSS
catalog), based on the decreasing trend in b values. This is still ongoing
research, so in this talk we present preliminary results. Based on these, we then discuss the
possibility that a decrease in b values can be considered as a precursor to
large earthquakes and an important indicator that has potential in terms of
forecasting large earthquakes.
-3
Speaker: Dr. Wang, Yuchen (Earthquake Research Institute, The University of Tokyo, Research Fellow)
Date/Time: 22 January 2019 (Tue), 15:30 - 16:30
Location: Institute of Statistical Mathematics, Seminar Room 7 (A504)
Title: Tsunami Data Assimilation in Disaster Mitigation
Abstract: Tsunami
data assimilation has been proposed for tsunami early warning. It estimates
the tsunami waveform by assimilating offshore observed data into a numerical
simulation, without calculating initial sea surface height at the source. The
optimum interpolation method is adopted in the data assimilation. However, the previous data assimilation method had a relatively high computational load, as it is necessary to run numerical simulations to obtain the tsunami wavefield. In our research, we proposed a new tsunami data assimilation approach based on Green's functions to reduce the computation time for tsunami early warning. Green's Function-based Tsunami Data Assimilation (GFTDA) forecasts the waveforms at Points of Interest (PoIs) by superposition of Green's functions between the observation stations and the PoIs. Unlike the previous assimilation approach, GFTDA does not require the calculation of the tsunami wavefield for the whole region during the assimilation process, because the Green's functions have been calculated in advance. The forecasted waveforms can be obtained by a simple matrix manipulation. This approach greatly reduces the time cost of tsunami warning because it no longer needs to run the tsunami propagation model, as long as the Green's functions are calculated in advance. By combining it with the Huygens-Fresnel principle, this method can be applied to regions without a dense observation network. Applications to the 2012 Haida Gwaii earthquake, the 2004 off-Kii Peninsula earthquake, and the 2009 Dusky Sound earthquake show that GFTDA achieves a more accurate and quicker tsunami early warning while reducing the computational cost.
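As a rough illustration of the "simple matrix manipulation" mentioned above, the sketch below forecasts PoI waveforms as a linear superposition of precomputed Green's functions weighted by the assimilated station amplitudes; the array shapes and the way the weights are obtained are assumptions for illustration, not the actual GFTDA implementation.

import numpy as np

# Assumed shapes (illustrative): n_sta stations, n_poi points of interest,
# n_lag time lags of the precomputed Green's functions, n_t assimilation steps.
n_sta, n_poi, n_lag, n_t = 3, 2, 50, 200
rng = np.random.default_rng(1)

G = rng.standard_normal((n_poi, n_sta, n_lag))   # precomputed Green's functions
a = rng.standard_normal((n_t, n_sta))            # assimilated wave amplitudes at stations

# Forecast at each PoI: convolve each station's assimilated amplitude history
# with the station-to-PoI Green's function and sum over stations.
forecast = np.zeros((n_t + n_lag - 1, n_poi))
for p in range(n_poi):
    for s in range(n_sta):
        forecast[:, p] += np.convolve(a[:, s], G[p, s])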
Seminar No. 71
Speaker: Dr. Harte, David (GNS Science, New Zealand, Statistical Seismologist and Hazard Modeller)
Date: 6 November 2018 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 5 (D313)
Title: Evaluation of Earthquake Stochastic Models Based on Their Real-Time Forecasts: A Case Study of Kaikoura 2016
Abstract: The
M7.8 Kaikoura NZ earthquake started at 2016-11-13 11:02:56
(UTC) with epicentre (173.02 deg E, 42.69 deg S), 15km NE of Culverden, and
lasted for about two minutes. It caused multiple fault ruptures to the north
as far as Seddon (150km from epicentre), the location of a large sequence in
2013. Since the mainshock, the bulk of the aftershock activity has also
migrated to the north. We
analyse real-time probability forecasts produced during the Kaikoura 2016
aftershock sequence, based on a spatial ETAS model. Forecasts were derived by
simulating the model forward over the required time interval multiple times.
Each forecast was evaluated at the end of the forecast time interval by
comparing with the number of events that eventually occurred. Further, the
spatial and temporal forecast characteristics were evaluated by comparing the
actual log-likelihood with those of the simulations. We
show that the model forecast too few aftershocks immediately after the mainshock, and too many aftershocks in the later stages of the sequence. The initial under-forecast is probably caused by the many missing smaller events early in the sequence and by an initial large under-estimate of the mainshock magnitude, which was 6.6 with a final solution of 7.8 three days later. Various catalogue, model and methodological problems become evident during such a real-time experiment and these are also discussed.
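A hedged sketch of the kind of count-based evaluation described above: simulate the fitted model many times over the forecast interval, then locate the observed number of events within the simulated distribution. The Poisson stand-in for the simulator and the numbers below are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(2)

def evaluate_count_forecast(simulate_counts, n_observed, n_sims=10000):
    """Empirical quantile of the observed event count within the simulated
    forecast distribution (values near 0 or 1 flag misfit)."""
    sims = np.array([simulate_counts() for _ in range(n_sims)])
    return (sims <= n_observed).mean()

# Illustrative stand-in for "simulate the ETAS model forward once":
simulate_counts = lambda: rng.poisson(120)
quantile = evaluate_count_forecast(simulate_counts, n_observed=180)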
Seminar No. 70
Speaker: Dr. Chen, Shi (Institute of Geophysics, China Earthquake Administration, Associate Professor)
Date: 28 August 2018 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: A new approach for terrestrial relative gravity adjustment using smoothness priors of drift rate
Abstract: The
relative gravimeter, which generally uses a zero-length spring as the gravity sensor, is still the first choice in terrestrial gravity measurement because of its efficiency and low cost. Because the instrumental drift rate changes with time and from meter to meter, estimating it requires returning to base stations, or to stations with known gravity values, for repeated measurements at regular intervals of a few hours during a practical survey. For campaign gravity surveys over large regions, however, where stations are separated by several to tens of kilometres, frequently returning to nearby reference stations greatly reduces survey efficiency and is extremely time-consuming. In this study, we proposed a new gravity data adjustment method that estimates the meter drift by means of Bayesian statistical inference. In our approach, we assume that the change in drift rate is a smooth function of elapsed time. Trade-off parameters are used to control the fitting residuals, and we employ Akaike's Bayesian Information Criterion (ABIC) to estimate these trade-off parameters. Comparison and analysis of simulated data between the classical and Bayesian adjustments show that our method is robust and adapts automatically to irregular, non-linear meter drift. Finally, we used this approach to process real campaign gravity data from North China. Our adjustment method is able to recover the time-varying drift-rate function of each meter and to detect abnormal meter drift during a gravity survey. We also defined an alternative error estimate for the inverted gravity value at each station on the basis of marginal distribution theory.
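To make the smoothness-prior idea concrete, here is a minimal penalized least-squares sketch: a smooth drift curve is estimated from repeated base-station readings with a second-difference roughness penalty whose weight plays the role of the trade-off parameter (chosen by hand here rather than by ABIC). Everything below, from the data to the discretization, is an illustrative assumption rather than the speaker's algorithm.

import numpy as np

# Illustrative repeated base-station readings (assumed data): times in hours,
# readings = constant gravity + smoothly varying meter drift + noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 24.0, 25)
y = 100.0 + 0.02 * t + 0.005 * t**2 / 24.0 + 0.01 * rng.standard_normal(t.size)

# Smoothness prior: minimize ||y - s||^2 + w^2 ||D2 s||^2, where s is the
# smooth (gravity + drift) curve and D2 takes second differences in time.
n = t.size
D2 = np.diff(np.eye(n), n=2, axis=0)
w = 10.0                                   # trade-off parameter (ABIC would pick this)
s = np.linalg.solve(np.eye(n) + w**2 * (D2.T @ D2), y)

# Drift relative to the first reading, and its rate (numerical derivative).
drift = s - s[0]
drift_rate = np.gradient(drift, t)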
Seminar No. 69
Speaker: Dr. Varini, Elisa (Institute of Applied Mathematics and Information Technology, National Research Council (CNR-IMATI), Italy)
Date: 20 March 2018 (Tue), 13:30-14:30
Place: The Institute of Statistical Mathematics, 4th-floor lounge
Title: Identification of earthquake clusters in Northeastern Italy by different approaches
Abstract: Earthquakes
do not occur randomly in space and time; rather, they tend to group into
clusters that can be classified according to their different properties,
presumably related to the specific geophysical properties of a seismic
region. Thus, we aim at exploring the spatio-temporal features of earthquake clusters in Northeastern Italy, based on a systematic analysis of robustly and uniformly detected seismic clusters reported in the local bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. First, the data are analysed by a method for the detection of earthquake clusters based on "nearest-neighbor distances" between events in the space-time-energy domain (Baiesi and Paczuski, 2004). Then they are analysed by applying a stochastic declustering algorithm based on the ETAS model (Zhuang, Ogata, and Vere-Jones, 2002), in which events are associated to clusters in accordance with their estimated probability distributions. Both methods allow for a robust, data-driven identification of seismic clusters and permit us to disclose possible complex features in the internal structure of the identified clusters. By comparing these approaches, we take advantage of different descriptions of the clustering process in order to assess the consistency and reliability of the findings. We found some evidence that swarm-like sequences are mostly associated with the north-western part of the study region, while burst-like sequences tend to occur in its south-eastern part.
Keywords: earthquake clustering, nearest-neighbor distance, stochastic declustering, ETAS model.
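A minimal sketch of the Baiesi-Paczuski nearest-neighbor distance used in the first step above: for each event, the "parent" is the earlier event minimizing a rescaled space-time-magnitude distance. The constants (b value, fractal dimension) and the toy catalog are illustrative assumptions.

import numpy as np

def nearest_neighbor_distances(t, x, y, m, b=1.0, df=1.6):
    """Baiesi-Paczuski (2004) metric: eta_ij = dt_ij * r_ij**df * 10**(-b*m_i)
    over earlier events i; returns the minimum distance and parent index."""
    n = len(t)
    eta = np.full(n, np.inf)
    parent = np.full(n, -1)
    for j in range(1, n):
        dt = t[j] - t[:j]
        r = np.hypot(x[j] - x[:j], y[j] - y[:j]) + 1e-6   # avoid zero distance
        d = dt * r**df * 10.0 ** (-b * m[:j])
        parent[j] = int(np.argmin(d))
        eta[j] = d[parent[j]]
    return eta, parent

# Toy catalog (assumed units: days, km, magnitude):
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 365, 200))
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
m = 2.0 + rng.exponential(1.0 / np.log(10), 200)
eta, parent = nearest_neighbor_distances(t, x, y, m)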
Seminar No. 68
Speaker: Prof. Ma, Kuo-Fong (Department of Earth Sciences, National Central University, Taiwan, Professor)
Date: 31 January 2018 (Tue), 13:30-14:30
Place: The Institute of Statistical Mathematics, Seminar Room 6 (A508)
Title: Probability on Seismic Hazard Assessment of Taiwan: Progress and Challenge
Abstract: Taiwan
Earthquake Model published the first public PSHA map of Taiwan in late 2015, and it has been widely discussed and adopted as a step toward seismic hazard mitigation and risk assessment. The model adopts source parameters for 38 seismogenic structures on a single-fault-segment basis, together with shallow areal sources for crustal events and intraplate and interplate subduction events. To evaluate the potential ground shaking resulting from each seismic source, the corresponding ground-motion prediction equations for crustal and subduction earthquakes are adopted. The highest hazard probability is evaluated to be in southwestern Taiwan and the Longitudinal Valley of eastern Taiwan. Right after the publication of PSHA2015, the damaging 2016 M6.6 Meinong earthquake occurred in southwestern Taiwan on a seismogenic structure that had not been identified. Historically, significant damaging crustal earthquakes in Taiwan have mostly come from complicated fault systems rather than from a single fault segment (e.g., the 1935 M7.5 Hsinchu-Taichung and 1906 M7.1 Meishan earthquakes). Technically, the 2016 M6.6 Meinong earthquake could be categorized as an areal-source event. The 1906 M7.1 Meishan earthquake has recently been resolved as rupture of a fault system comprising a blind NE-striking thrust together with an EW surface-breaching fault (one of the identified seismogenic structures). These events suggest that evaluating seismic hazard on a single-fault-segment basis might be inadequate. Beyond the difficulty of assigning a slip rate to a single segment in the probability calculation, how to handle the slip rate in probability for a complex fault system is a challenge. At the same time, PSHA evaluation of ground motion from areal sources and active faults might double-count the hazard for an event that belongs to both categories. How to determine the maximum-magnitude event for an areal source, and how to delineate the involvement of areal-source events in complex fault systems, draw further attention to source categorization and its partition in probability for seismic hazard assessment.
Seminar No. 67
Speaker: Dr. Wu, Stephen (The Institute of Statistical Mathematics, Department of Statistical Modeling, Assistant Professor)
Date: 3 October 2017 (Tue), 16:30-17:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Review of earthquake early warning from an engineering perspective
Abstract: After
the concept of earthquake early warning (EEW) first appeared in the 1980s, officially operating EEW systems have been deployed around the world, for example in Japan, Taiwan, Mexico, the USA, and Italy. EEW algorithms have evolved into a large variety, including on-site, regional, and hybrid methods. The underlying seismic models range from simple point-source ground motion prediction equations to sophisticated finite-fault prediction models. Recently, researchers have also proposed real-time GPS-based EEW and purely data-driven seismic intensity prediction models. Besides the scientific advances, engineering applications of EEW have become another important research topic. In this talk, I will briefly go through the topics above from a practical implementation point of view and highlight some important challenges of EEW.
Seminar No. 66
-1 Speaker: Dr. Wu, Jing (Institute of Geology and Geophysics, Chinese Academy of Sciences, Associate Professor)
Date: 29 August 2017 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Seismicity and Seismic Anisotropy beneath eastern Tibet
Abstract: Eastern
Tibet is one of the most tectonically active areas of the Chinese mainland. The Songpan-Ganzi Block, Longmenshan Orogenic Belt, and Sichuan Basin are located in this area from west to east. The uplift mechanisms of eastern Tibet have been hotly debated in recent years. In addition, a series of great earthquakes in eastern Tibet (the 2008 Wenchuan Mw7.9, 2013 Lushan Mw6.6, and most recently the 2017 Jiuzhaigou Mw6.5) show the urgent need for accurate seismicity detection, as it is still unclear how aftershocks evolve because of the poor station coverage and the overlapping of aftershocks. Here, I would like to present our studies in eastern Tibet, covering seismic anisotropy and seismicity detection. Crustal anisotropy is inverted from shear-wave splitting of the Pms phase at permanent stations, and we observe that tectonic escape, crustal flow, and crustal shortening may contribute to the tectonic evolution of various sub-areas of eastern Tibet. We also concentrated on seismicity detection for the 2013 Lushan earthquakes and obtained details of the spatial and temporal aftershock evolution with the help of the matched filter technique, suggesting that afterslip is the potential mechanism triggering the Lushan aftershocks. In order to understand more about eastern Tibet, we will keep working in this area, focusing on SKS, SKKS, and PKS (hereafter XKS) phase splitting and on repeating earthquakes, which may reveal geodynamic processes in the mantle and fault slip rates, respectively.
-2 Speaker: Dr. Mak, Sum (German Research Centre for Geosciences (GFZ-Potsdam), Germany, Research Assistant)
Date: 29 August 2017 (Tue), 17:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Empirical Validation of Seismic Hazard Models
Abstract: Seismic
hazard, for applications such as engineering structural design and insurance
loss estimation, is represented as a probabilistic forecast. The most common form of seismic hazard representation is the probability of exceeding a certain level of ground motion. The hazard also varies spatially, forming a hazard map. As observations accumulate, there have recently been more and more attempts to statistically evaluate the performance of probabilistic seismic hazard predictions against ground motion observations. This talk presents the general theory of this type of study, using the United States Geological Survey National Seismic Hazard Maps as an example.
Seminar No. 65
Speaker: Prof. Liu, Jann-Yenq (Institute of Space Science, National Central University, Taiwan, Professor)
Date: 13 June 2017 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 6 (A508)
Title: Statistical Analyses on seismo-ionospheric disturbances and precursors of the 11 March 2011 M9.0 Tohoku Earthquake
Abstract: Ground-based
observations of the GPS TEC (total electron content) and satellite probing of
radio occultation (RO) of FORMOSAT-3/COSMIC (F3/C) are employed to study the
co-seismic disturbances and precursors of the 11 March 2011 M9.0 Tohoku
earthquake. For the first time, the tsunami origin is observed in the ionospheric observations.
The horizontal propagation of seismo-traveling ionospheric
disturbances (STIDs) induced by tsunami and seismic waves of the Tohoku
earthquake are observed by the GPS TEC, while the associated vertical
propagation is probed by multi ground-based observations and F3/C RO
sounding. The raytracing and
beamforming techniques are used to find the propagation and origin of the
STIDs triggered by the seismic and tsunami waves. Meanwhile, z test and the
Receiver Operating Characteristic (ROC) curve are employed to find the
characteristic of the temporal SIPs (seismo-ionospheric precursor) of the GIM
(global ionosphere map) TEC associated with earthquakes in Japan during
1998-2014. It is found that anomalies appearing 3 days before the Tohoku earthquake agree well with this characteristic, which suggests that the SIPs of the earthquake have been
observed. A global study on the
distribution of anomalies shows that the SIPs specifically and continuously
occur over the epicenter on 8 March 2011, 3 days prior to the Tohoku
earthquake. Finally, a physical model of the ionosphere is used to reproduce the observed anomalies and to find possible causes of the Tohoku SIPs.
Seminar No. 64
-1 Speaker: Prof. Jiang, Changsheng (Institute of Geophysics, China Earthquake Administration, Researcher (Professor equivalent))
Date: 29 March 2017 (Wed), 13:30-14:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Assessment of earthquake monitoring capability and score of seismic station detection capability in China Seismic Network (2008-2015)
Abstract: In
order to scientifically assess the earthquake monitoring capability of the China Seismic Network (CSN), we investigated the seismic observation data of the CSN, with a total of 1001 stations considered, during the period from 2008/10/01 to 2015/09/17. The distributions of the seismic detection probability (PE) and the minimum magnitude of completeness (MP) were analyzed using the "Probability-based Magnitude of Completeness" (PMC) method. In addition to mapping the seismic monitoring capability of the entire CSN, we developed a new method named the "seismic monitoring capability scale" and defined a seismic detection capability score, Dscore, based on amplitude contour curves, to analyze the statistical characteristics and spatial distribution of the detection capabilities of the national and regional stations. Additionally, a "best objective function" for seismic detection capability was used to simulate the improvement in the monitoring capability of the CSN that could be obtained by improving observation conditions.
-2 Speaker: Prof. Chen, Shi (Institute of Geophysics, China Earthquake Administration, Researcher (Professor equivalent))
Date: 29 March 2017 (Wed), 14:30-15:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Gravity changes before and after the 2015 Mw 7.8 Gorkha, Nepal and the 2008 Mw 7.9 Wenchuan, China earthquakes
Abstract: Absolute
gravity measurements at four stations in southern Tibet show significant
gravity increase from 2011 to 2013, up to ~22 μGal at the Shigatse station.
Here we report new measurements at the Shigatse station conducted in 2016,
which show that the gravity increase ended after the 2015 Nepal Mw 7.8
earthquake. Similar gravity changes are measured at the Pixian absolute
gravimetry station near the epicenter of the 2008 Wenchuan Mw 7.9 earthquake,
where 17 absolute gravity measurements have been conducted since 2002,
including four pre-earthquake measurements that show a ~30 μGal increase from
2002 to 2008. The trend of gravity increase ended after the Wenchuan
earthquake. We analyzed the gravity effects from ground vertical motions
using data from continuous GPS stations collocated with these absolute
gravimetry stations, and surficial and hydrological processes using local
hydrological data. We found that these effects are much smaller than the
observed gravity increase before the earthquakes, and suggest that the
pre-earthquake gravity increase may be caused by strain and mass (fluid)
transfer in broad seismic source regions. Further studies are needed to
validate such pre-earthquake gravity changes, which, however, are difficult to resolve from space-based gravity models.
Seminar No. 63
Speaker: Prof. Zhou, Shiyong (School of Earth and Space Sciences, Peking University, Professor)
Date: 18 January 2017 (Wed), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 7 (A504)
Title: Could the abnormal seismicity increase triggered remotely by great earthquakes be used to judge the regional earthquake risk?
Abstract: We
study the possible dynamic triggering effect in Northern China, including the Tangshan area, associated with the Mw 9.0 Japan Tohoku earthquake of March 11, 2011 (hereafter, the Tohoku earthquake). We use the space-time epidemic-type aftershock sequence (space-time ETAS) model as the seismicity model in this research, applying the stochastic declustering method with a Gaussian kernel function to obtain images of the space-time variation of background seismicity in the target area. In this way the study can reveal whether areas with large coseismic displacement experienced a sudden abnormal increase in seismicity. We find that the Tohoku earthquake had little effect on the total and background seismicity of the Tangshan area, which suggests that the seismic structure of the Tangshan area is fundamentally stable. However, when we examined the possible dynamic triggering effect in Southwestern China, we found that seismicity in some places in Sichuan and Yunnan increased suddenly and abnormally almost at the same time as the 2004 Mw 9.2 Sumatra earthquake. This is a statistical signature of coseismic dynamic triggering. The analysis helps to locate the areas of abnormally high seismicity in the time-space images, and these images also help to judge whether such "abnormal" seismicity is genuinely anomalous or was triggered by a particular large earthquake.
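As a loose illustration of the declustering-plus-kernel step described above, the sketch below smooths ETAS-style background probabilities with a Gaussian kernel to image the background rate; the background probabilities are taken as given (in practice they come from fitting the space-time ETAS model), and all numbers are illustrative assumptions.

import numpy as np

def background_rate_map(x, y, prob_bg, grid_x, grid_y, bandwidth=10.0):
    """Gaussian-kernel image of the background seismicity rate, where each
    event is weighted by its stochastic-declustering background probability."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx)
    norm = 2.0 * np.pi * bandwidth**2
    for xi, yi, wi in zip(x, y, prob_bg):
        r2 = (gx - xi) ** 2 + (gy - yi) ** 2
        rate += wi * np.exp(-0.5 * r2 / bandwidth**2) / norm
    return rate

# Toy catalog with assumed background probabilities (would come from ETAS fitting):
rng = np.random.default_rng(5)
x, y = rng.uniform(0, 200, 500), rng.uniform(0, 200, 500)
prob_bg = rng.uniform(0, 1, 500)
rate = background_rate_map(x, y, prob_bg, np.linspace(0, 200, 81), np.linspace(0, 200, 81))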
Seminar No. 62
-1 Speaker: Dr. Helmstetter, Agnès (Institut des Sciences de la Terre, France, Research Fellow)
Date: 26 October 2016 (Wed), 15:00-16:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Repeating icequakes
Abstract: We
have detected repeating icequakes at three different sites: an
alpine glacier (Argentière, massif du Mont-Blanc, France), near the base of
the western margin of the Greenland Ice Sheet, and on a rock-glacier (Gugla,
Valais, Switzerland). Repeating icequakes are events with very similar
waveforms, located at the base of a glacier, with quasi-periodic recurrence
times of the order of minutes or hours, and progressive changes in magnitude.
The activity of each cluster is intermittent. Burst-like episodes can last
for a few hours or months, and then disappear. In Greenland, temporal changes
of inter-event times and magnitudes are correlated with temperature, because
surface meltwater yields an increase in basal water pressure and in glacier
flow velocity. But each cluster reacts differently to temperature changes,
probably because the connectivity to the subglacial drainage system is
different for each asperity. In contrast, we observed no correlation between
temperature and repeating icequakes at Glacier d'Argentière and at Gugla rock
Glacier. However, we observed bursts of repeating icequakes at Gugla
triggered by snow falls. We suggest that the snow weight may have induced a
transition between aseismic slip and unstable stick-slip. In addition to
repeating basal icequakes, we also detected swarms of icequakes induced by
crevasse opening, probably promoted by melt-water flow. These swarms of
icequakes have very different statistical distributions in time, space and
magnitude compared with repeating icequakes. Their recurrence times are power
law distributed, their magnitudes obey the Gutenberg-Richter law, and the
size of each cluster is several tens of meters. These different patterns may
help to identify the triggering mechanisms of earthquake swarms, and to
discriminate between fluid flow and aseismic slip.
-2 Speaker: Dr. Harte, David (GNS Science, New Zealand, Statistical Seismologist and Hazard Modeller)
Date: 26 October 2016 (Wed), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Determining the Uncertainty in Earthquake Forecasts
Abstract: Forecasts
based on a self-exciting model, like ETAS, are often produced by simulation.
From these simulations, an empirical probability distribution can be derived
for a forecast in a specified space-time-magnitude volume. We
will show that the forecast distribution can be characterised by probability
generating functions. This shows how deeply complex the dependency structure
is in such a model. While of theoretical interest, they remain intractable to
me in a practical sense. We
then consider whether the forecast distribution can be approximated, using
less computation than that required for simulation, by a "standard" multi-parameter probability distribution. The multiple parameters give us the ability to at least fit a distribution with comparable mean and variance to those of the forecast distribution. One of the main questions is how to determine the forecast mean and then, given the mean, the variance.
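One concrete way to read the last point: fit a standard two-parameter count distribution to the simulated forecast counts by matching mean and variance. The negative binomial used below is a common choice for overdispersed earthquake counts, but it is offered here only as an illustrative assumption, not as the distribution advocated in the talk.

import numpy as np

def fit_negative_binomial(counts):
    """Moment-matched negative binomial (r, p): mean = r(1-p)/p,
    var = r(1-p)/p^2; requires var > mean."""
    mean, var = np.mean(counts), np.var(counts)
    if var <= mean:
        raise ValueError("counts are not overdispersed; a Poisson fit may suffice")
    p = mean / var
    r = mean * p / (1.0 - p)
    return r, p

# Illustrative simulated forecast counts (stand-in for ETAS simulations):
rng = np.random.default_rng(6)
sim_counts = rng.negative_binomial(n=5, p=0.05, size=5000)
r_hat, p_hat = fit_negative_binomial(sim_counts)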
Seminar No. 61
Speaker: Dr. Helmstetter, Agnès (Institut des Sciences de la Terre, France, Research Fellow)
Date: 11 October 2016 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 7 (A504)
Title: Adaptive smoothing of seismicity in time, space and magnitude for long-term and short-term earthquake forecasts
Abstract: We
present new methods for long-term and short-term earthquake forecasting that
employ space, time, and magnitude kernels to smooth seismicity. These
forecasts are applied to California and Japan seismicity and compared with other models. Our models are purely statistical and rely on very few assumptions about seismicity. In particular, we do not use the Omori-Utsu law.
The magnitude distribution is either assumed to follow the Gutenberg-Richter law or is
estimated non-parametrically with kernels. We employ adaptive kernels of
variable bandwidths to estimate seismicity in space, time, and magnitude
bins. For long-term forecasts, the long-term rate in each spatial cell is
defined as the median value of the temporal history of the smoothed
seismicity rate in this cell, circumventing the relatively subjective choice
of a declustering algorithm. For short-term forecasts, we simply assume
persistence, that is, a constant rate over short time windows. Our long-term
forecast performs slightly better than our previous forecast based on
spatially smoothing a declustered catalog. Our short-term forecasts are
compared with those of the epidemic-type aftershock sequence (ETAS) model.
Although our new methods are simpler and require fewer parameters than ETAS,
the obtained probability gains are surprisingly close. Nonetheless, ETAS
performs significantly better in most comparisons, and the kernel model with a
Gutenberg-Richter law attains larger gains than the kernel model that non-parametrically
estimates the magnitude distribution. Finally, we show that combining ETAS
and kernel model forecasts, by simply averaging the expected rate in each
bin, can provide greater predictive skill than ETAS or the kernel models can
achieve individually.
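A minimal sketch of the adaptive-bandwidth idea: each event's smoothing bandwidth is set to the distance to its k-th nearest neighbouring event, so dense clusters get sharp kernels and sparse regions get broad ones. The value of k and the toy data are illustrative assumptions, not the parameters used in the talk.

import numpy as np

def adaptive_bandwidths(x, y, k=5):
    """Per-event bandwidth = distance to the k-th nearest neighbouring event."""
    pts = np.column_stack([x, y])
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)
    return d[:, k]                       # column 0 is the zero self-distance

def smoothed_rate(x, y, gx, gy, h):
    """Seismicity rate on a grid from Gaussian kernels with per-event bandwidths h."""
    X, Y = np.meshgrid(gx, gy)
    rate = np.zeros_like(X)
    for xi, yi, hi in zip(x, y, h):
        rate += np.exp(-0.5 * ((X - xi) ** 2 + (Y - yi) ** 2) / hi**2) / (2 * np.pi * hi**2)
    return rate

rng = np.random.default_rng(7)
x, y = rng.standard_normal(300) * 20 + 50, rng.standard_normal(300) * 20 + 50
h = adaptive_bandwidths(x, y, k=5)
rate = smoothed_rate(x, y, np.linspace(0, 100, 51), np.linspace(0, 100, 51), h)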
Seminar No. 60
Speaker: Dr. Hasih Pratiwi (Sebelas Maret University, Surakarta, Indonesia, Lecturer)
Date: 30 August 2016 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 7 (A504)
Title: ESTIMATING EARTHQUAKE RISK BY USING EPIDEMIC TYPE AFTERSHOCK SEQUENCE MODEL APPROACH (Case Study in Java Island, Indonesia) (Hasih Pratiwi and Respatiwulan)
Abstract: Physical
losses caused by earthquakes include deaths and casualties as well as damage to buildings and areas. Therefore, efforts to reduce earthquake risk are very necessary. Risk and loss generated by earthquakes naturally involve the insurance world: insurance, as a nonbank financial institution, can provide guarantees or protection just as the banking sector does. This research discusses a method to estimate earthquake risk using the epidemic-type aftershock sequence model. The calculation of earthquake risk can be carried out through a damage probability matrix. The information contained in the damage probability matrix and in the damage ratios can be combined to define the mean damage ratio. Then, based on the estimated intensity function of the epidemic-type aftershock sequence model, we can formulate the expected annual damage ratio, and the existing method for calculating earthquake risk is modified to obtain earthquake insurance premium rates. We use earthquake data for Java Island obtained from the U.S. Geological Survey, which consist of occurrence time, longitude, latitude, magnitude, depth, and catalogue source. The time span of this research is from January 1, 1973, to December 31, 2010. The zonation map of earthquakes generated in this research differs from the SNI 2010 zonation map issued by the Indonesian Ministry of Public Works; the difference lies in the distribution of earthquake zones, especially in regencies and cities with high risk. The earthquake insurance premium rates for high and medium intensities obtained in this research are significantly greater than the premium rates issued by PT Reasuransi Maipark Indonesia. The current premium rates are relatively small when compared with the rates in Turkey and with those from this research.
Keywords: earthquake insurance, intensity function, epidemic type aftershock sequence model, damage probability matrix.
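To illustrate how a damage probability matrix and damage ratios combine into a mean damage ratio (and, with an event rate, an expected annual damage ratio), here is a small sketch; the matrix values, damage ratios, and annual rates are invented for illustration and are not the study's figures.

import numpy as np

# Rows: intensity levels; columns: damage states; entries are P(damage state | intensity).
# All values below are illustrative assumptions, not the study's damage probability matrix.
dpm = np.array([
    [0.80, 0.15, 0.04, 0.01],   # low intensity
    [0.50, 0.30, 0.15, 0.05],   # medium intensity
    [0.20, 0.30, 0.30, 0.20],   # high intensity
])
damage_ratio = np.array([0.0, 0.1, 0.4, 0.8])   # central damage ratio per state

# Mean damage ratio per intensity level, then an expected annual damage ratio
# using an assumed annual rate of damaging events per intensity (e.g. from ETAS).
mean_damage_ratio = dpm @ damage_ratio
annual_rate = np.array([0.5, 0.1, 0.02])
expected_annual_damage_ratio = annual_rate @ mean_damage_ratio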
Seminar No. 59
Speaker: Prof. Chen, Yuh-Ing (Institute of Statistics, National Central University, Taiwan, Distinguished Professor)
Date: 19 July 2016 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Statistical evaluation of short-term hazard of earthquakes after 1999 M 7.3 Chi-Chi shock in Taiwan
Abstract: The
temporal-spatial hazard of the earthquakes in a continental region of Taiwan
after the 1999 September 21 MW =7.7 Chi-Chi shock is investigated. The
Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989) that combines the
frequency-magnitude distribution (Gutenberg and Richter, 1944) and
time-decaying occurrence rate (Utsu et al., 1995) is conventionally employed
for assessing the earthquake hazard after a large shock (Wiemer, 2000).
However, it is found that the b values in the frequency-magnitude distribution
of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock, and then gradually increased. The
observation of a time-dependent distribution of magnitude motivated us to
propose a modified RJ model (MRJ) to assess the earthquake hazard (Chen et
al. 2015). To incorporate the possible impact of previous large earthquakes
on the following ones, a simplified epidemic-type aftershock sequence (ETAS) model (Ogata, 1988; Ogata and Zhuang, 2006) is further considered. A modified
ETAS (METAS) model that combines the simplified ETAS model and the
time-dependent distribution of magnitude is then suggested for the hazard
evaluation. The MRJ and METAS models are further separately used to make
one-day forecast of large earthquakes in the study region. To depict the
potential rupture area for future large earthquakes, we also develop the
space-time MRJ and METAS models and construct the corresponding relative
hazard (RH) maps. The Receiver Operating Characteristics (ROC) curves (Swets,
1988) demonstrate that the RH map based on the MRJ model is as good as the
one based on the METAS model for exploring the spatial hazard of earthquakes
in a short time after the Chi-Chi shock.
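For readers unfamiliar with the Reasenberg-Jones model mentioned above, a minimal sketch of its aftershock rate, lambda(t, M) = 10^(a + b(Mm - M)) / (t + c)^p, evaluated for an assumed parameter set; the values below are generic ones often quoted in the literature, not those estimated for the Chi-Chi sequence.

import numpy as np

def reasenberg_jones_rate(t_days, m, m_main, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones (1989) rate of aftershocks with magnitude >= m
    at time t (days) after a mainshock of magnitude m_main."""
    return 10.0 ** (a + b * (m_main - m)) / (t_days + c) ** p

# Expected number of M>=5 aftershocks in the first 7 days after an M7.3 mainshock,
# by numerical integration of the rate (all parameter values are illustrative).
t = np.linspace(0.01, 7.0, 2000)
expected = np.trapz(reasenberg_jones_rate(t, m=5.0, m_main=7.3), t)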
Seminar No. 58
Speaker: Dr. Zhuang, Jiancang (The Institute of Statistical Mathematics, Department of Statistical Modeling, Associate Professor)
Date: 29 June 2016 (Wed), 16:00-16:40
Place: The Institute of Statistical Mathematics, Seminar Room 5 (D313/D314); joint with the ISM Statistical Mathematics Seminar, FY2016
Title: Replenishing missing data in the observation record of marked point processes
Abstract: This
presentation illustrates a fast approach for replenishing missing data in the
record of a temporal point process with time-independent marks. The basis of this method is that, if such a point process is completely observed, it can be transformed into a homogeneous Poisson process by using a biscale empirical transformation. This approach includes three key steps: (1) obtain the transformed process by using the empirical transformation and find a time-mark range that likely contains missing events; (2) estimate a new empirical distribution function based on the data in the time-mark range inside which the events are supposed to be completely observed; (3) generate events in the missing region. This method is tested on a synthetic dataset and applied to the data-missing problem in the JMA record of the Kumamoto aftershock sequence, which started on 2016-04-15 in Japan. The influence of missing data on the MLE of the ETAS parameters is studied by comparing the analysis results for the original and replenished datasets. The results show that the MLEs of the ETAS parameters vary when the ETAS model is fitted to the recorded catalog with different cut-off magnitudes, whereas when the replenished dataset is used the MLEs of the ETAS parameters remain stable.
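A hedged sketch of step (1): transform each event's time and mark through their empirical distribution functions, so that a completely observed record looks roughly uniform on the unit square and depleted regions (e.g., small marks early in an aftershock sequence) stand out. This is only a schematic reading of the biscale empirical transformation, with toy data; the actual method is as presented in the talk.

import numpy as np

def biscale_empirical_transform(times, marks):
    """Map (time, mark) pairs through their empirical CDFs; for a completely
    observed record the transformed points should look uniform on [0, 1]^2."""
    n = len(times)
    u = np.argsort(np.argsort(times)) / (n - 1.0)   # empirical CDF of times
    v = np.argsort(np.argsort(marks)) / (n - 1.0)   # empirical CDF of marks
    return u, v

def occupancy(u, v, nbin=10):
    """Counts per cell on the unit square; unusually empty cells hint at a
    time-mark range with missing events."""
    h, _, _ = np.histogram2d(u, v, bins=nbin, range=[[0, 1], [0, 1]])
    return h

# Toy record (assumed): event times and magnitudes with small early events removed.
rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0, 100, 1500))
m = rng.exponential(0.5, 1500) + 3.0
keep = ~((t < 10) & (m < 3.5))          # mimic missing small early aftershocks
u, v = biscale_empirical_transform(t[keep], m[keep])
cells = occupancy(u, v)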
Seminar No. 57
Speaker: Dr. Shcherbakov, Robert (Department of Earth Sciences, Western University (Ontario), Canada, Associate Professor)
Date: 22 June 2016 (Wed), 16:40-17:20
Place: The Institute of Statistical Mathematics, Seminar Room 5 (D313/D314); joint with the ISM Statistical Mathematics Seminar, FY2016
Title: Statistics and Physics of Aftershocks
Abstract: Aftershocks
are ubiquitous in nature. They are the manifestation of relaxation phenomena
observed in various physical systems. In the studies of seismicity,
aftershock sequences are observed after moderate to large main shocks.
Empirical observations reveal that aftershocks obey power-law scaling with
respect to their energies (seismic moments) which in magnitude domain can be
modelled by the Gutenberg-Richter law. The decay rate of aftershocks above a
certain magnitude is typically inversely proportional to the time since the
main shock and is approximated by the modified Omori law. The largest
aftershocks in a sequence constitute significant hazard and can inflict
additional damage to infrastructure that is already affected by the main
shock. Therefore, the estimation of the magnitude of a possible largest
aftershock in a sequence is of high importance. In this presentation, a
Bayesian predictive distribution and the corresponding confidence intervals
for the magnitude of the largest expected aftershock in a sequence are
derived using the framework of Bayesian analysis and extreme value
statistics. The analysis is applied to several well-known aftershock
sequences world-wide to construct retrospectively the confidence intervals
for the magnitude of the subsequent largest aftershock by using the
statistics of early aftershocks in the sequences. In order to infer the
physical mechanisms of triggering and time delays responsible for the
occurrence of aftershocks, a nonlinear viscoelastic slider-block model is
considered. It is shown that nonlinear viscoelasticity plays a critical role
in the triggering of aftershocks. The model reproduces several empirical laws
describing the statistics of aftershocks, which are observed in the studies
of systems with relaxation dynamics, specifically, for earthquakes.
Seminar No. 56
Speaker: Dr. Strader, Anne (GFZ German Research Centre for Geosciences, Germany, Research Fellow)
Date: 7 June 2016 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 6 (A508)
Title: Evaluation of Current CSEP Testing Methods: Case Studies for Japan and California
Abstract: The
Collaboratory for the Study of Earthquake Predictability (CSEP) was developed
to rigorously test earthquake forecasts retrospectively and prospectively
through reproducible, completely transparent experiments within a controlled
environment (Zechar et al., 2010).
Forecasts are individually evaluated using a set of likelihood-based
consistency tests, which measure the consistency between the number, spatial
and magnitude distribution of the observed and forecasted seismicity during
the testing period (Schorlemmer et al., 2007; Zechar et al., 2010).
Additionally, the classical paired t-test and non-parametric w-test are used
to directly compare two forecasts' performances at target earthquake
locations. These tests rely on a hypothesis testing framework, resulting in a
final decision (to reject or not reject a forecast), rather than quantifying
the model's lack-of-fit or localized performance. Residual methods are
employed by the CSEP to discern spatial variation in model performance
compared to the observed seismicity distribution and other models, but are
not currently incorporated into decision-making processes. To illustrate what
can be learned from commonly utilized current CSEP tests, we present two case
studies. The first is a retrospective evaluation of a rate-and-state forecast
for the Japan CSEP testing classes, where spatiotemporal seismicity rate
fluctuations are inverted for Coulomb stress changes. Although the model
underestimates the number of earthquakes following the M9.0 Tohoku mainshock,
it displays positive information gain over baseline ETAS seismicity rates
(Ogata, 2011) within the rupture region. The second forecasting experiment is
a continued prospective evaluation of the time-independent California
earthquake forecasts tested in the Regional Earthquake Likelihood Model
(RELM) experiment, from 2011-2016. Additionally, we test two models developed
by the United States Geological Survey (USGS): the
time-dependent Uniform California Earthquake Rupture Forecast (UCERF2) and
time-independent National Seismic Hazard Mapping Project (NSHMP) models. To reduce bias from expert-based
decision making utilized in current testing methods, we introduce the
framework of a Dynamic Risk Quantification (DRQ) platform that will be
developed to combine and optimize ensemble forecasts and hazard models using
a data-driven approach, and updated as new data become available.
Seminar No. 55
-1 Speaker: Dr. Guo, Yicun (Graduate School of Earth and Space Science, Peking University, doctoral student)
Date: 22 March 2016 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Iterative finite ETAS model and some results of the histETAS model of the North China Craton
Abstract: We
introduce an iterative algorithm to refine the finite sources of main shocks in the finite ETAS model, in which the triggering weight of each subfault is its productivity divided by the whole productivity of the main shock. We also apply the histETAS model to the North China Craton. It turns out that the b-value and background seismicity patterns coincide with the static Coulomb stress changes induced by historical large earthquakes, and the spatial variation of the p-value agrees with the velocity structure of the lithosphere under major fault zones. We therefore infer that the statistical characteristics of seismicity reflect the properties of the medium to some extent, and we discuss future earthquake hazard.
-2 Speaker: Ogata, Yosihiko (The Institute of Statistical Mathematics, Professor Emeritus / Earthquake and Volcano Information Center, Earthquake Research Institute, The University of Tokyo, Project Researcher)
Date: 22 March 2016 (Tue), 17:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: 3D spatial models for seismicity beneath the Kanto region
Abstract: Development
of point-process models for seismicity in 3D space (longitude, latitude, and depth) beneath the Kanto area, down to 100 km depth, is needed more than for seismicity elsewhere in the world. This is because three tectonic plates meet beneath the Kanto plain, and the interactions among the interplate and intraplate earthquakes are too complex for detailed analysis and forecasting in a 2D space that ignores depth. We
consider the 3D hierarchical space-time ETAS (epidemic-type aftershock
sequence) model. Among the characterizing parameters, the background
seismicity rate \mu and aftershock productivity K are highly sensitive to the
locations, so that these parameters should be location-dependent.
Furthermore, the impact of the M9.0 2011 Tohoku-Oki earthquake on the seismicity beneath the Kanto region has been so large that we need a
space-time function for representing the amount of the induced seismicity
beneath Kanto by this giant earthquake. Specifically, we adopt the Omori-Utsu
function as the effect of induced earthquakes, started after the occurrence
time of the Tohoku-Oki earthquake, where we assume that the aftershock
productivity parameter KM9 of the Omori-Utsu function is also
location-dependent. For forecasting future large earthquakes, we further need
to estimate the location-dependent b-value of the Gutenberg-Richter law. The
spatial variations of the characteristic parameters \mu(x,y,z), K(x,y,z) ,
KM9(x,y,z) and b(x,y,z) of our model are inverted to visualize the regional
changes of the seismic activity. For this objective, we make 3D Delaunay
tessellation of the Kanto volume, where every earthquake belongs to vertices
of a tetrahedron. Each of the above-mentioned parameter functions is a
3-dimensional piecewise linear function defined by the values at the four
Delaunay tetrahedral vertices. The
estimates of the focal parameter functions are obtained by an optimal
trade-off between the goodness of fit to the earthquake data and the
smoothness constraints (or roughness penalties) of the variations of
parameter values. The strengths of the constraints, or the penalties, on the respective parameter functions can be simultaneously adjusted from the data by means of an empirical Bayesian method using Akaike's Bayesian Information Criterion (ABIC).
Keywords: ABIC, aftershock productivity, background seismicity rate, b-values, Delaunay function, Delaunay tessellation, empirical Bayesian method, Omori-Utsu function for induced seismicity, penalized log-likelihood.
Seminar No. 54
Speaker: Dr. Wang, Ting (Department of Mathematics and Statistics, University of Otago, New Zealand, Lecturer)
Date: 9 February 2016 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Identification of seismic phases using Markov-modulated marked Hawkes processes
Abstract: Based
on a temporal Markov-modulated Hawkes process that we developed earlier to
investigate long-term patterns of seismic activity with multiple mainshocks,
we made extensions to this temporal model to include spatial variation of the
seismic activity and the earthquake magnitudes. Our aim is to categorize
spatiotemporal seismic hazards holistically, using the entire earthquake
record in a selected region to identify patterns correlated with subsequent
large earthquakes, rather than the traditional way of selecting individual
foreshock-mainshock or mainshock-aftershock sequences. I will use several
case studies to illustrate how this model works and discuss the problems that we had with the model fitting.
Seminar No. 53
-1 Speaker: Dr. Yin, Fengling (Institute of Geophysics, China Earthquake Administration, Associate Researcher (Assistant Professor equivalent))
Date: 27 January 2016 (Wed), 13:30-14:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Coulomb stress evolution along the middle segment of the Red River fault zone over the past 180 years due to coseismic, postseismic and interseismic deformation (Yin, Fengling, Jiang, Changsheng and Han, Libo)
Abstract: The
Red River fault zone, as the boundary between the Sichuan-Yunnan rhombic block and the southeastern margin of the Tibetan Plateau, and lying near the central Yunnan city group, deserves attention for its seismic activity. The Red River fault zone within Yunnan has experienced at least 9 earthquakes of M≥6 in the past 180 years. Using a stratified viscoelastic lithospheric model, we calculate the Coulomb failure stress evolution along the Red River fault zone over the past 180 years due to coseismic, postseismic and interseismic deformation. By analyzing 25 earthquakes that occurred along the Red River fault zone and adjacent faults, we find that the middle segment of the Red River fault zone has shown low seismic activity in the past two hundred years. This is consistent with the 'seismic gap' seen in the earthquake catalog. If no earthquake occurs around the Red River fault zone within about 30 years, this 'seismic gap' may persist as a result of postseismic and interseismic deformation.
-2 Speaker: Dr. Taroni, Matteo (Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy, Post-doctoral fellow)
Date: 27 January 2016 (Wed), 14:30-15:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Some recent techniques to improve earthquake forecasting (Taroni, Matteo, Marzocchi, Warner, Zechar, Jeremy and Werner, Maximilian)
Abstract: In
this presentation I will show some recent results regarding earthquake forecasting techniques. In particular I will show: i) how to consider aftershocks and foreshocks in the seismic hazard computation, with an application to the Italian case; ii) how to merge different catalogues to obtain a better estimation of the tapered Gutenberg-Richter distribution parameters, with an application to the global and Italian cases; iii) how to create an ensemble model to improve the performance of short-term earthquake forecasting models, with an application to the New Zealand case.
Seminar No. 52
Speaker: Dr. Guillas, Serge (Department of Statistical Science, University College London, U.K., Reader / Earthquake Research Institute, The University of Tokyo, Project Associate Professor)
Date: 17 November 2015 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 6 (A508)
Title: Dimension reduction for the quantification of uncertainties in tsunami and climate models
Abstract: VOLNA,
a nonlinear shallow water equations solver, produces high resolution
simulations of earthquake-generated tsunamis. However, the uncertainties in
the bathymetry (from irregularly-spaced observations) have an impact on
tsunami waves. We first employ a stochastic partial differential equation
(SPDE) approach to quantify uncertainties in these boundary fields. These
uncertainties are then parametrized to be used as inputs of an emulator of
VOLNA. However, the dimension of these boundary fields is large and must be
reduced. We apply the gradient-based kernel dimension reduction approach
(gKDR) by Fukumizu and Leng (2014) and construct a Gaussian Process emulator
on this reduced input space. We propagate uncertainties in the bathymetry to
obtain an improved probabilistic assessment of tsunami hazard. In a
separate climate application, we employ the Bayesian calibration of complex
computer models using Gaussian Processes, introduced by Kennedy and O'Hagan
(2001), that has proven to be effective in a wide range of applications.
However, the size of the outputs, such as climate models' spherical outputs,
leads to computational challenges in implementing this framework. Covariance
models for data distributed on the sphere also present additional challenges
compared to covariance models for data distributed over a Euclidean space.
To overcome these various challenges, we make use of the spherical harmonics
(SHs) decomposition of the computer model output, and then apply a Gaussian
process assumption to the coefficients in the decomposition. Furthermore, using
the SPDE approach, we can capture non-stationarity in the spatial process.
Hence, we generalize further the spherical correlation framework by expanding
the SPDE parameters used to quantify the nonstationary behavior in the
functional space spanned by the SHs. We illustrate our findings on several
synthetic examples. In particular, our method can outperform the calibration
based on principal components. Finally we show that our technique has the
potential to calibrate the Whole Atmosphere Community Climate Model (WACCM).
Seminar No. 51
-1 Speaker: Dr. Gerstenberger, Matthew (GNS Science, New Zealand, Risk and Engineering Team Leader, Senior Seismologist)
Date: 1 September 2015 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: The New Zealand National Seismic Hazard Model: Rethinking PSHA
Abstract: We
are currently revising the New Zealand National Seismic Hazard Model. In this
revision we are exploring some of the fundamental assumptions of the model
and investigating how uncertainties in earthquake source and ground motion
estimation propagate through to the end uses of the model. Uncertainties
related to the source modelling that come from a paucity of data and from
different methods that can be used to model the seismic sources are currently
not fully quantified in the way we model seismic hazard. Additionally,
seismic sources are generally assumed to be a stationary Poisson process and
earthquake clustering is ignored. Including these uncertainties in the way
risk is modelled based on the outputs of the National Seismic Hazard Model
will likely lead to more robust estimates of risk for use by industry and in
the development of design standards.
-2 Speaker: Dr. Chen, Shi (Institute of Geophysics, China Earthquake Administration, Associate Professor)
Date: 1 September 2015 (Tue), 17:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: A study on the regional gravity changes before large earthquakes from the statistical perspectives
Abstract: The repeated
gravity surveys, also called mobile gravity measurements, have been carried
out for decades in the Chinese mainland. Significant gravity changes have
been detected before some cases of great earthquakes, such as the 1976
Tangshan Ms7.8 Earthquake, 2008 Wenchuan Ms8.0 earthquake, etc. The main aim
of the repeated gravity surveys is to monitor the geophysical field
variations in some major seismic hazard zones. By this sort of in-situ
gravimetric network, the yearly changes of regional gravity can be obtained.
Through Molchan error diagram tests, we found that the observed gravity changes are statistically correlated with the occurrence of future large earthquakes; that is, the gravity changes are more powerful than a seismicity-rate model in forecasting large earthquakes. These results imply that gravity changes before earthquakes contain precursory information about future large earthquakes.
Keywords: gravity changes, earthquake prediction, Molchan error diagram, repeated gravity measurement, Chinese mainland.
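A minimal sketch of the Molchan error diagram logic used in the test above: for each alarm threshold on a precursor value (here, gravity change), plot the fraction of space-time occupied by alarms against the fraction of target earthquakes missed; points below the diagonal indicate skill beyond random guessing. The synthetic data are purely illustrative assumptions.

import numpy as np

def molchan_curve(precursor, target):
    """Molchan error diagram points: for each threshold on the precursor value,
    return (fraction of cells under alarm, fraction of target events missed)."""
    order = np.argsort(precursor)[::-1]          # alarm the largest values first
    hits = np.cumsum(target[order])
    tau = np.arange(1, len(precursor) + 1) / len(precursor)
    nu = 1.0 - hits / target.sum()
    return tau, nu

# Synthetic example: gravity-change values on space-time cells, with target
# earthquakes slightly more likely where the change is large (assumed data).
rng = np.random.default_rng(9)
g = rng.standard_normal(1000)
target = (rng.random(1000) < 0.02 * (1 + (g > 1.0))).astype(float)
tau, nu = molchan_curve(g, target)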
Seminar No. 50
Speaker: Dr. Kagan, Yan Y. (Department of Earth and Space Sciences, University of California, Los Angeles (UCLA), U.S.A., Researcher)
Date: 4 August 2015 (Tue), 16:00-18:00
Place: The Institute of Statistical Mathematics, Meeting Room 2 (D208)
Title: Statistics of earthquake focal mechanisms
Abstract:
I. Double-couple earthquake source: symmetry and rotation
We
analyze earthquake focal mechanisms and their forecast both analytically and
statistically. This problem is complex because source mechanisms are
tensor-valued variables, thus their analysis requires applying sophisticated
mathematical and statistical tools, many of which are not yet fully
developed. We describe general and statistical properties of the seismic
moment tensor, in particular, its most important form -- the double-couple
(DC) mechanism. We establish a method for the analysis of a DC source, based
on the quaternion technique, and then apply quaternions for the statistical
analysis of earthquake catalogs. The important property of the focal
mechanism is its symmetry. We describe the classification of the mechanism
symmetry and the dependence of the DC orientation on its symmetry. Four
rotations exist in a general case of a DC with the nodal-plane ambiguity,
there are two transformations if the fault plane is known, and there is one
rotation if the sides of the fault plane are known. A statistical analysis of
symmetrical objects has long been the subject of crystallographic texture
investigations. We describe the application of crystallographic methods to
focal mechanism analysis and consider theoretical statistical distributions
appropriate for the DC orientation approximation. Uniform random rotation
distributions for various DC sources are discussed, as well as two
non-uniform distributions: the rotational Cauchy and von
Mises-Fisher. We discuss how the parameters of these rotations can be estimated
by a statistical analysis of earthquake source properties in global
seismicity using the GCMT catalog. We also show how earthquake focal
mechanism orientations can be displayed on the Rodrigues vector space.
II. Statistical earthquake focal mechanism forecasts
In the
focal mechanism forecast, the sum of normalized seismic moment tensors within
a 1000 km radius is calculated and the P- and T-axes for the predicted focal
mechanism are evaluated on the basis of the sum (Kagan and Jackson 1994,
JGR). Simultaneously we calculate an average rotation angle between the
forecasted mechanism and all the surrounding mechanisms. This average angle
shows tectonic complexity of a region and indicates the accuracy of the
prediction. Recent interest by CSEP and GEM has motivated some improvements,
particularly a desire to extend the previous forecast to polar and near-polar
regions. The major problem in extending the forecast is the focal mechanism
calculation on a spherical surface. In a modified program focal mechanisms
have been projected on a plane tangent to a sphere at a forecast point. A
comparison with the old 75S-75N forecast shows that in equatorial regions the
forecasted focal mechanisms are almost the same, and the difference in the
forecasted focal mechanisms rotation angle is close to zero. However, closer
to the 75 latitude degree the difference in the rotation angle is large
(around a factor 1.5 in some places). The Gamma-index was calculated for the
average focal mechanism moment. A non-zero Index indicates that earthquake
focal mechanisms around the forecast point have different orientations. Thus
deformation complexity displays itself both in the average rotation angle and
in the Index. However, sometimes the rotation angle is close to zero, whereas
the Index is large, testifying to a large CLVD presence. Both new 0.5x0.5 and
0.1x0.1 degree forecasts are posted at http://eq.ess.ucla.edu/~kagan/glob_gcmt_index.html
III. Evaluating focal mechanism forecast skill
We
discuss the ways to test the focal mechanism forecast efficiency. We start
with several verification methods, first based on ad-hoc, empirical
assumptions. However their performance is questionable. In the new work we
apply a conventional likelihood method to measure the skill of a forecast.
The advantage of such an approach is that an earthquake rate prediction can
be adequately combined with a focal mechanism forecast, if both are based on
the likelihood scores. This results in a general forecast optimization. To
calculate the likelihood score we need to compare actual forecasts or
occurrences of predicted events with the null hypothesis that the mechanism's
3-D orientation is random. To better understand the resulting complexities we
calculate the information (likelihood) score for two rotational distributions
(Cauchy and von Mises-Fisher), which are used to approximate earthquake
source orientation patterns. We then calculate the likelihood score for
earthquake source forecasts and for their validation by future seismicity
data. Several issues need to be explored when analyzing observational results:
their dependence on the forecast and data resolution, internal dependence of
scores on the forecasted angle, and random variability of likelihood scores.
We propose a preliminary solution to these complex problems, as these issues
need to be explored by a more extensive theoretical and statistical analysis.
IV. Future focal mechanism studies
1. Statistical earthquake focal mechanism forecasts; rigorous likelihood methods for evaluating forecast skill.
2. Likelihood analysis of the GCMT catalog, including focal mechanisms.
3. Focal mechanism clustering.
4. Collapsing focal mechanism patterns.
5. Influence of the Earth's surface on focal mechanism interaction (Morawiec, Ch. 8).
6. Integrating the Cauchy distribution on Rodrigues space (Morawiec, pp. 116-119).
7. Calculating Cauchy and von Mises-Fisher distributions for the 120-degree rotation limit.
8. Investigating the sign-change symmetry of a DC earthquake source.
9. Studies of the statistics of earthquake focal mechanisms in Rodrigues space.
10. Rotation-translation distribution of double-couples as different from an arbitrary symmetric deviatoric second-rank tensor.
Seminar No. 49
Speaker: Prof. Künsch, Hans R. (Department of Mathematics, ETH Zurich, Switzerland, Emeritus Professor)
Date: 7 April 2015 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Data assimilation in seismology?
Abstract: The
accuracy of weather forecasts has increased substantially over the past
decades. This is due to at least three factors: the increase in computing power, which allows higher accuracy in solving the equations of
the underlying physical model, a denser set of observations of the state of
the atmosphere and better methods for data assimilation, that is the use of
these observations to adjust the initial conditions of the model
sequentially. In order to represent the uncertainty in assimilation and
forecasting, ensembles are used whose members represent different states of
the atmosphere that are compatible with the observations and the physical
dynamics. Such ensemble methods should be viewed as Monte Carlo methods which
provide the link to statistics. Recently
there has been interest in using these data assimilation tools also in
seismology in order to improve forecasts and quantify their uncertainty. In
this talk I will discuss some of the attempts in this direction. Time
permitting, I will also present some new ideas for ensemble data
assimilation.
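Since the talk centers on ensemble data assimilation, here is a minimal stochastic ensemble Kalman filter update for a linear observation of a state vector; it is a generic textbook-style sketch under assumed dimensions and noise levels, not anything specific to the seismological applications discussed.

import numpy as np

def enkf_update(ensemble, y_obs, H, obs_std, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_ens, n_state) forecast ensemble; H: (n_obs, n_state) observation operator."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # state anomalies
    Y = X @ H.T                                       # predicted-observation anomalies
    R = obs_std**2 * np.eye(H.shape[0])
    K = X.T @ Y @ np.linalg.inv(Y.T @ Y / (n_ens - 1) + R) / (n_ens - 1)
    perturbed = y_obs + obs_std * rng.standard_normal((n_ens, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Toy example: 40-member ensemble, 3-dimensional state, observe the first component.
rng = np.random.default_rng(10)
ens = rng.standard_normal((40, 3)) + np.array([1.0, 0.0, -1.0])
H = np.array([[1.0, 0.0, 0.0]])
analysis = enkf_update(ens, y_obs=np.array([0.5]), H=H, obs_std=0.2, rng=rng)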
Seminar No. 48
Speaker: Ishibe, Takeo (Earthquake and Volcano Information Center, Earthquake Research Institute, The University of Tokyo, Project Researcher)
Date: 24 February 2015 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 6 (A508)
Title: Overview of Seismicity Changes in Inland Japan after the 2011 Tohoku-Oki Earthquake and Its Interpretation
Abstract: In
this presentation, I overview the widespread changes in seismicity rate and
distribution of focal mechanism after the Tohoku-Oki earthquake (Mw9.0) and
summarize the possible contributing factors. In Tohoku, westward from the
Tohoku-Oki source, significant increases in seismicity rate were observed in
N. and S. Akita, SW off Oga peninsula, and Yamagata/Fukushima and
Ibaraki/Fukushima boundary regions as well as other surrounding areas. On the
other hand, aftershock activities in the source regions of recent large
earthquakes such as the 2008 Iwate-Miyagi earthquake have been suppressed. In
Kanto, southwest of the Tohoku-Oki source, interplate earthquakes were
typically activated, while belt-like seismicity along the western edge of
slab-slab contact zone and shallow earthquakes in some areas were also
activated. The
most plausible factor is the static changes in the Coulomb stress, which
seems to be valid for retrospectively forecasting the changes in seismicity
on some level, while some activated seismicity showed clear counter-evidence.
Remotely triggered local events, whose origin times coincide well with
the arrivals of mainshock seismic waves, suggest that dynamic stress changes
also contribute. Some swarm-like activities, showing temporal expansion of
the focal area which is attributed to fluid diffusion, suggest that changes
in pore fluid pressure should be another possible factor. The contribution of
indirectly triggered earthquakes might be important in some regions because
stress changes imparted by neighboring indirect aftershocks could be
comparable with those from a distant mainshock. Postseismic slip and
viscoelastic effects would also play an important role in long-term hazard
assessments.

No. 47
Speaker: Dr. Segou, Margaret (National Observatory of Athens, Greece; Researcher)
Date: 10 February 2015 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, 4th-floor lounge
Title: The Future of Earthquake Predictability
Abstract: Over the
last decade, dense seismological networks around the world have provided the
opportunity to study more aftershock sequences in seismically active areas
across the world such as California (San Andreas Fault), Japan, New Zealand
(Canterbury Fault, Christchurch) and continental rift systems (Corinth Gulf,
Greece). The importance behind this is evident; the 2008 M7.9 Sichuan event
was still producing catastrophic aftershocks (the 2013 Lushan M6.6) five
years later. The above provides the necessary motivation for geophysicists to
develop short- and long-term earthquake forecasts that provide scientists and
the public with authoritative information on seismic hazard, and ultimately to
answer the question of when the next big earthquake will occur. Static and
dynamic triggering are often described as the two primary mechanisms for
earthquake clustering in time and space. Recent studies have provided
evidence that physics-based earthquake forecast models, combining fault aging
laws and the static stress triggering hypothesis, can accurately predict
(80%) transient seismicity rates. Static triggering plays an important role
in spatial clustering at distances of 2-3 rupture lengths away from the seismic
source whereas dynamic triggering studies usually focus on larger distances
(>1000 km). But how dependent are our calculations on our incomplete
knowledge of the ambient stress of a region? What are the implications behind
the time dependent fault behavior? The last two questions are the key for
reducing the uncertainties of physical forecast models. Quite often the
development of such quantitative and testable models is followed by extensive
statistical performance evaluation, which is critical for understanding their
merits and pitfalls. In
this seminar I focus on recent development on physics-based earthquake models
using worldwide examples and how they compare with statistical models.
Furthermore, I discuss how we can reduce their uncertainties and sketch the
future of our scientific predictability. Is it possible to hope for higher
information gains in the near future, and how could these forecast models be
most effective in Japan?

No. 46
-1 Speaker: Kumazawa, Takao (The Institute of Statistical Mathematics, Risk Analysis Research Center, Earthquake Prediction Analysis Project; Project Researcher)
Date: 27 January 2015 (Tue), 15:00-16:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Predicting Offshore Swarm Rate by Volumetric Strain Changes in Izu Peninsula, Japan
Abstract: The
eastern offshore region of the Izu peninsula is one of the well-known volcanically
active regions in Japan, where magma intrusions have been observed several times
since the 1980s, monitored by strainmeters located nearby. Major swarm activities
have been synchronously associated with significant coseismic and preseismic
volumetric strain changes (Earthquake Research Committee,
2010). We investigated the
background seismicity changes during these earthquake swarms using the
nonstationary ETAS model (Kumazawa and Ogata, 2013, 2014), and found the
following. The volumetric strain change data, modified by removing the
effect of earth tides and precipitation as well as removing coseismic jumps,
have much higher cross-correlations to the background rates of the ETAS model
than to the whole seismicity rate change of the ETAS model. Furthermore, the strain
changes precede the background seismicity by a lag of about half a day. This
relation suggests an enhanced prediction of earthquakes in this region using
volumetric strain measurements. Thus we propose an extended ETAS model where
the background seismicity rate is predicted by the time series of preceding
volumetric strain changes. Our numerical results for Izu region show
consistent outcomes throughout the major swarms.
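The lag analysis described above can be illustrated with a small sketch that scans the cross-correlation between a strain series and an estimated background rate over a range of lags; the sampling interval, series, and lag window below are invented for the example and are not the authors' code.

# Illustrative sketch: lagged cross-correlation between a corrected
# volumetric-strain series and an estimated ETAS background rate, both assumed
# to be sampled on the same regular time grid (e.g. hourly).
import numpy as np

def lagged_xcorr(strain, background_rate, max_lag):
    """Return correlation coefficients for lags -max_lag..+max_lag (in samples).

    A positive best lag means the strain series leads the background rate.
    """
    s = (strain - strain.mean()) / strain.std()
    b = (background_rate - background_rate.mean()) / background_rate.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for k in lags:
        if k >= 0:                       # pair strain at t with rate at t + k
            corr.append(np.mean(s[: len(s) - k] * b[k:]))
        else:
            corr.append(np.mean(s[-k:] * b[: len(b) + k]))
    return lags, np.array(corr)

# Synthetic demonstration: the background rate lags the strain by 12 samples.
rng = np.random.default_rng(1)
strain = rng.normal(size=2000).cumsum()
rate = np.roll(strain, 12) + rng.normal(scale=5.0, size=2000)
lags, corr = lagged_xcorr(strain, rate, max_lag=48)
print("best lag (samples):", lags[np.argmax(corr)])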
-2 Speaker: Dr. Wang, Ting (Department of Mathematics and Statistics, University of Otago, New Zealand; Lecturer)
Date: 27 January 2015 (Tue), 16:30-17:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Marked point process modeling with missing data in volcanic eruption records
Abstract: Despite
ongoing efforts to compile new data, eruption records, particularly those of
earlier time periods, are pervasively incomplete. The probability of missing
an ancient eruption is much higher than that of a recent eruption. We consider
modeling both the times and sizes of the eruptions using a marked point
process. We propose to model the marks (the sizes of the events) as having a
time-varying distribution which takes the higher proportion of missing
smaller events in earlier records into consideration. We then estimate the
proportion of detected events over time based on the assumption that the most
recent record is complete and that the record of eruptions with the largest
size in the considered catalog is complete. With this information, we can
then estimate the true intensity of volcanic eruptions.
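As a rough illustration of the thinning idea behind such models, the sketch below assumes a logistic detection probability that increases with eruption size and recency and uses it to thin a synthetic eruption record; the functional form and all parameter values are assumptions for illustration only, not the speaker's model.

# Illustrative sketch (assumptions mine): a time-varying detection probability
# for eruptions, increasing toward the present and with eruption size, used to
# thin a "true" eruption catalogue.
import numpy as np

def detection_prob(t, size, t_end, a=4.0, b=1.5):
    """Logistic detection probability; recent (t near t_end) and large
    eruptions are almost surely recorded, old small ones are often missed."""
    recency = t / t_end                     # 0 = oldest, 1 = most recent
    return 1.0 / (1.0 + np.exp(-(a * (recency - 0.5) + b * size)))

rng = np.random.default_rng(2)
t_end = 1000.0
true_times = np.sort(rng.uniform(0, t_end, size=300))
true_sizes = rng.exponential(scale=1.0, size=300)     # stand-in eruption sizes

p = detection_prob(true_times, true_sizes, t_end)
observed = rng.uniform(size=300) < p                  # thinning of the record

# Apparent completeness per 200-year bin: detected count vs true count.
bins = np.arange(0, t_end + 1, 200)
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (true_times >= lo) & (true_times < hi)
    print(f"{lo:4.0f}-{hi:4.0f}: detected {observed[in_bin].sum():3d} of {in_bin.sum():3d}")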
No. 45
Speaker: Dr. Schehr, Grégory (Laboratoire de Physique Théorique et Modèles Statistiques, Orsay-University Paris-Sud, France)
Date: 14 October 2014 (Tue), 11:00-12:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Exact Statistics of the Gap and Time Interval Between the First Two Maxima of Random Walks and Lévy Flights
Abstract: (PDF)

No. 44
-1 Speaker: Prof. Zhou, Shiyong (School of Earth and Space Sciences, Peking University; Professor)
Date: 5 August 2014 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Seismicity simulation in Western Sichuan of China based on the fault interactions and its implication on the estimation of the regional earthquake risk
Abstract: Seismicity
over 10000 years in Western Sichuan of China has been simulated based on the
mechanical synthetic seismicity model we developed. According to the analysis
of the simulated synthetic seismic catalogue, the occurrence of strong
earthquakes with Ms ≥ 7.0 in the whole region of Western Sichuan is rather
random, very close to a Poisson process with a seismic rate of 0.0454/year,
which means it is reasonable to estimate the regional earthquake risk with a
Poisson model in Western Sichuan. However, the occurrence of strong
earthquakes with Ms ≥ 7.0 on the individual faults of Western Sichuan is far
from a Poisson process
and could be predicted with a time-dependent prediction model. The fault
interaction matrices and earthquake transfer possibility matrices among the
faults in Western Sichuan have been calculated based on the analysis of the
simulated synthetic catalogues. We have also calculated the static change in
Coulomb failure stress (CFS) on one fault induced by a strong earthquake on
another fault in Western Sichuan to discuss the physical implications of the
earthquake transfer possibility matrices inferred from the synthetic
catalogue.
Keywords: Simulation of earthquake generation, Poisson model, Coulomb stress, Seismic hazard

-2 Speaker: Dr. Wang, Dun (Earthquake Research Institute, The University of Tokyo / Japan Society for the Promotion of Science; Research Fellow)
Date: 5 August 2014 (Tue), 17:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Rupture speeds for recent large earthquakes
Abstract: Studying
the rupture speeds of earthquakes is of broad interest for earthquake
research because it has a large effect on the strong near-field shaking that
causes damage during earthquakes. Also rupture speed is a key observation for
understanding the controlling stresses and friction during an earthquake, yet
the speed and its variations are usually difficult to determine. Using only
far-field seismic waveforms, which are the only data available for many large
earthquakes, it is difficult to estimate the rupture speed with
standard waveform inversions because of the trade-off between the rupture speed and
the slip location. Here
we applied a back projection method to estimate the rupture speeds of Mw ≥
7.5 strike-slip earthquakes since 2001 which could be analyzed using Hi-net
in Japan. We found that all events had very fast average rupture speeds of
3.0-6.0 km/s, which are near or greater than the local shear wave velocity
(supershear). These values are faster than for thrust and normal faulting
earthquakes that generally rupture with speeds of 1.0-3.0 km/s. Considering
the depth-dependent shear-wave velocity, the average propagation speeds for
all of the strike-slip events are closer to or greater than the shear wave
velocity. For large strike-slip events, transition from subshear to
supershear usually occurs within distances of 15 to 30 km from the
initiation, which is probably the reason for the scarcity of observed
supershear earthquakes for smaller magnitudes. Earthquakes
with supershear ruptures can cause more damage than events with subshear
ruptures because of the concentration of energy in the forward direction of
the rupture. Numerical modeling shows strong focusing and other effects of
energy at the rupture front which can intensify the ground motions. A recent
example is the April 13, 2010 Qinghai, China earthquake (Mw 6.9), where a
moderate-size event caused extensive damage in the Yushu region at the
southeastern end of the fault. Careful evaluation of long and straight
strike-slip faults should be emphasized for predicting strong ground motions
due to supershear rupture.

No. 43
Speaker: Dr. Aiken, Chastity (Georgia Institute of Technology, U.S.A.; National Science Foundation Graduate Fellow, ARCS Scholar)
Date: 8 July 2014 (Tue), 16:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Triggered Seismic Activity in Geothermal Regions and on Strike-Slip Faults
Abstract: Dynamic
stresses caused by large earthquakes are capable of triggering a wide range
of seismic/aseismic responses at remote distances. These responses include
instantaneously triggered microearthquakes, deep tectonic tremor, earthquake
swarms, slow-slip events, and near-surface icequakes. Systematic studies of these triggered
phenomena not only help us to understand how large earthquakes affect
seismic/aseismic processes at remote distances but also help improve our
understanding of the necessary physical conditions responsible for the
generation of seismic activity.
In this talk I present two recent studies: (1) a comparison of triggered
microearthquakes in three geothermal regions of California and (2) a
comparison of triggered tremor on four strike-slip faults in the Western
Hemisphere. Triggered seismic
activity is characterized as being triggered by the surface waves of large,
distant earthquakes. Triggered
earthquakes in geothermal regions are generally small magnitude (M<4) and
have distinct P and S waves, whereas triggered tremor is a low-amplitude,
emergent signal with no distinct P wave.
After identifying the large earthquakes that trigger seismic activity,
we analyze and compare the peak ground velocities, seismic wave incidence
angles, amplitude spectra of all distant earthquakes we examined, as well as
the background activity in each region to determine the factors that promote
triggering in geothermal regions and on strike-slip faults.

No. 42
Speaker: Dr. Aranha, Claus (Graduate School of Systems and Information Engineering, University of Tsukuba; Assistant Professor)
Date: 27 May 2014 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Using Evolutionary Algorithms to optimize earthquake risk models: Early Ideas
Abstract: Evolutionary
Algorithms are a class of meta-heuristics that use genetic principles to
sample good solutions from a search space. They have shown great promise in a
wide variety of optimization problems, especially in problem domains where the
search space is multimodal and/or non-continuous. However, evolutionary
algorithms have not yet seen a lot of use in the optimization of statistical
models for earthquake forecasting. Our goal is to explore this combination. In
this talk, we will (briefly) explain what evolutionary algorithms are, and
then proceed to outline our early proposals and results regarding their use
for the generation of an RELM-based earthquake forecast model.
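For readers unfamiliar with evolutionary algorithms, the following toy sketch shows the basic select-crossover-mutate loop applied to a vector of forecast rates scored by a Poisson log-likelihood; the encoding, operators, and objective are simplifications of the general idea, not the speaker's RELM setup.

# Minimal evolutionary-algorithm sketch (illustrative only).  Each individual
# is a candidate vector of forecast rates on a spatial grid, scored by the
# Poisson log-likelihood against toy observed counts.
import numpy as np

rng = np.random.default_rng(3)
n_cells, pop_size, n_gen = 20, 40, 200
observed = rng.poisson(lam=2.0, size=n_cells)        # toy "observed" counts

def fitness(rates):
    rates = np.clip(rates, 1e-6, None)
    return np.sum(observed * np.log(rates) - rates)  # Poisson log-likelihood

pop = rng.uniform(0.1, 5.0, size=(pop_size, n_cells))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection: pick the better of two random parents.
    i, j = rng.integers(pop_size, size=(2, pop_size))
    parents = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
    # Uniform crossover followed by Gaussian mutation.
    mask = rng.uniform(size=(pop_size, n_cells)) < 0.5
    children = np.where(mask, parents, parents[rng.permutation(pop_size)])
    children += rng.normal(scale=0.1, size=children.shape)
    # Elitism: keep the best individual from the previous generation.
    children[0] = pop[np.argmax(scores)]
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best rates (first 5 cells):", np.round(best[:5], 2), "vs observed:", observed[:5])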
No. 41
-1 Speaker: Dr. Wang, Min-Zhen (The Institute of Statistical Mathematics, Risk Analysis Research Center; Project Researcher)
Date: 4 March 2014 (Tue), 15:00-16:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Distributions on Torus, Cylinder and Disc (Wang, Min-Zhen and Shimizu, Kunio)
Abstract: Statistics
for data which include angular observations is known as directional
statistics. Bivariate circular data such as wind directions measured at two
points in time are modeled by using bivariate circular distributions or
distributions on the torus. Likewise circular-linear data are modeled by
using distributions on the cylinder and disc. We propose some extensions of
distributions on the torus, cylinder and disc in the framework of directional
statistics. A new
circular distribution (Wang and Shimizu, 2012) is also introduced, which is
obtained by applying the Möbius
transformation to a univariate cardioid random variable. The distribution
function, trigonometric moments, and conditions for unimodality and symmetry
are studied. Kato and Jones (2010) study a family of distributions which is
obtained by applying the Möbius
transformation to a von Mises random variable, and we discuss the
relationship between our model and the Kato-Jones model. The bivariate
circular case (Wang and Shimizu, 2012) is generated from a
circular-circular structural model linked with the Möbius
transformation or a method of trivariate reduction. The joint probability
density function, trigonometric moments and circular-circular correlation
coefficient are explicitly expressed. An illustration is given for wind
direction data at 6 a.m. and noon as an application of the bivariate cardioid
distribution. The distribution on the cylinder we propose is generated from
a combination of von Mises and transformed Kumaraswamy distributions. It is
an extension of the Johnson and Wehrly (1978) model. The marginal and conditional
distributions of the proposed distribution are given. A distribution using
the method of generating a cylindrical distribution with specified marginals
is also proposed. We generate skew or asymmetric distributions on the disc by
using the Möbius transformation and modified Möbius
transformations as extensions of the Möbius
distribution proposed by Jones (2004). The new distributions, called the
modified Möbius distributions, have six parameters.
They can be reduced to the Möbius and uniform
distributions as special cases, but many members of the family are skew
distributions for both the linear and the angular random variables. Some
properties such as the joint probability and marginal density functions of
the proposed distributions are obtained.
References:
[1] Johnson, R. A. and Wehrly, T. E. (1978). Some angular-linear distributions and related regression models. Journal of the American Statistical Association, 73, 602-606.
[2] Jones, M. C. (2004). The Möbius distribution on the disc. Annals of the Institute of Statistical Mathematics, 56, 733-742.
[3] Kato, S. and Jones, M. C. (2010). A family of distributions on the circle with links to, and applications arising from, Möbius transformation. Journal of the American Statistical Association, 105, 249-262.
[4] Wang, M.-Z. and Shimizu, K. (2012). On applying Möbius transformation to cardioid random variables. Statistical Methodology, 9, 604-614.
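As a small illustration of the Möbius construction discussed in the abstract above, the sketch below samples angles from a cardioid distribution and pushes them through the generic disc-preserving Möbius map; this parametrization is a standard one and is not necessarily the exact form used in the cited papers.

# Illustrative sketch: sample angles from a cardioid distribution and push
# them through a Möbius (fractional linear) transformation of the unit circle.
# The map used is the generic w = (z + a) / (1 + conj(a) z) with |a| < 1.
import numpy as np

def sample_cardioid(n, mu=0.0, rho=0.3, rng=None):
    """Rejection sampling from the cardioid density (1 + 2*rho*cos(t - mu)) / (2*pi)."""
    rng = rng or np.random.default_rng()
    out = []
    while len(out) < n:
        t = rng.uniform(-np.pi, np.pi, size=n)
        u = rng.uniform(0.0, 1.0 + 2.0 * abs(rho), size=n)
        out.extend(t[u <= 1.0 + 2.0 * rho * np.cos(t - mu)].tolist())
    return np.array(out[:n])

def mobius_circle(theta, a):
    """Map e^{i*theta} to (z + a)/(1 + conj(a) z) and return the new angle (|a| < 1)."""
    z = np.exp(1j * theta)
    w = (z + a) / (1.0 + np.conj(a) * z)
    return np.angle(w)

rng = np.random.default_rng(4)
theta = sample_cardioid(5000, mu=0.0, rho=0.3, rng=rng)
phi = mobius_circle(theta, a=0.5 * np.exp(1j * np.pi / 4))
print("circular mean before: %.3f  after: %.3f"
      % (np.angle(np.mean(np.exp(1j * theta))), np.angle(np.mean(np.exp(1j * phi)))))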
-2 Speaker: Dr. Llenos, Andrea L. (US Geological Survey, Earthquake Science Center; Postdoctoral Researcher)
Date: 4 March 2014 (Tue), 16:30-17:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Statistical modeling and identification of potentially induced seismicity rate changes
Abstract: (TBA)

No. 40
-1 Speaker: Dr. Varini, Elisa (Institute of Applied Mathematics and Information Technology, National Research Council (IMATI-CNR), Italy)
Date: 18 February 2014 (Tue), 15:00-16:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Bayesian estimation of doubly stochastic Poisson processes: a particle filtering approach
Abstract: (TBA)

-2 Speaker: Dr. Harte, David (GNS Science, New Zealand; Statistical Seismologist and Hazard Modeller)
Date: 18 February 2014 (Tue), 16:30-17:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Stochastic Earthquake Models: Ways to Improve and Insights into the Physical Process
Abstract: 1.
Bayesian estimation of doubly stochastic Poisson processes: a
particle filtering approach We
aim to explore the hypothesis that the earthquakes of a seismic region occur
under different physical conditions, corresponding to as many seismicity
phases characterized by different occurrence rates. This
hypothesis can be modeled by doubly stochastic Poisson processes in which the
observed process of the occurrence times of the earthquakes is a point
process whose conditional intensity function is assumed to be dependent on
both the past history and the current hidden state. By
assuming some of the possible choices for the observed point process and the
hidden state process, a Bayesian analysis is carried out in which the
likelihood function is approximated by the particle filtering method. 2.
Stochastic Earthquake Models: Ways to Improve and Insights into the
Physical Process We
present a version of the ETAS model where the offspring rates vary both
spatially and temporally. This is in response to deficiencies discussed in
[1]. This is achieved by distinguishing between those space-time volumes
where the interpoint space-time distances are small, and those where they are
considerably larger. In the process of modifying a stochastic earthquake
model, one needs to justify assumptions made, and these in turn raise
questions about the nature of the underlying physical process. We will use
this version of the ETAS model as the basis for our discussion, and by
focussing on aspects where the model does not perform so well, attempt to
find physical explanations for such lack of fit. Some possible discussion
points are as follows. What
is the nature of the so-called background process in the ETAS model? Is it
simply a temporal boundary (t=0) correction or does it represent an
additional tectonic process not described by the aftershock component? Or are
these two alternatives on completely different time scales? An
epidemic (the basic analogy underpinning the ETAS model), or a living
organism, can evolve by reproducing offspring that are slightly different to
that of their parents - randomness or gene mutation. Certain
"modified" individuals will be able to adapt to the environment
better and tend to survive over others. In the ETAS context, a lower value of
$\alpha$ will cause more "generations" in the aftershock sequence.
This allows for a richer and more complex evolution of the process, both
spatially and temporally. Alternatively, if alpha is large, then more of the
aftershocks are direct offspring of the mainshock. In the epidemic context,
this implies that the mainshock contains much more of the "DNA"
which governs the evolution of the overall sequence. What
is the relationship between fractal dimension and clustering? Does the
fractal dimension provide a better discrimination between those space-time
volumes with higher offspring rates and the others? If so, does the fractal
dimension provide a more obvious physical description of the difference
between these high rate volumes and the lower rate volumes, and hence a
suggestive physical explanation?
[1] Harte, D.S. (2013). Bias in Fitting the ETAS Model: A Case Study Based on New Zealand Seismicity. Geophys. J. Int., 192(1), 390-412.
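As background for the discussion of the ETAS productivity parameter $\alpha$ above, the following minimal sketch evaluates the temporal ETAS conditional intensity; the parameter values are illustrative and are not estimates from any catalogue.

# A minimal sketch of the temporal ETAS conditional intensity, to make the
# role of the productivity parameter alpha concrete.  Parameters are arbitrary.
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.1, K=0.02,
                   alpha=1.5, c=0.01, p=1.1, m0=3.0):
    """lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) * (t - t_i + c)^(-p)."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0)) * (dt + c) ** (-p))

# Toy catalogue: a mainshock at t = 10 (M6.5) with two earlier small events.
times = np.array([2.0, 7.0, 10.0])
mags = np.array([3.5, 4.0, 6.5])
for t in (10.5, 11.0, 15.0, 40.0):
    print(f"t = {t:5.1f} days: lambda = {etas_intensity(t, times, mags):8.3f} events/day")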
No. 39
-1 Speaker: Matsu'ura, Mitsuhiro (The Institute of Statistical Mathematics, Risk Analysis Research Center, Earthquake Prediction Analysis Project; Visiting Researcher)
Date: 14 January 2014 (Tue), 13:00-14:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Inversion of GPS Data using ABIC
Abstract: To
monitor crustal movements of the Japanese Islands, a nation-wide dense GPS
network (GEONET) has been operated by Geographical Survey Institute of Japan
(now Geospatial Information Authority of Japan) since 1996. We developed an
inversion method to estimate unbiased interseismic slip-deficit rates at
plate interfaces from GPS displacement rate (velocity) data with an elastic
dislocation model. In this method, first, we subtract theoretical surface
velocities due to known steady relative plate motion from the observed GPS
data, and presume the residuals to be caused by slip deficit at plate
interfaces. However, the observed GPS data always include rigid block
translation and rotation, which cannot be explained by the elastic
dislocation model. We treated the rigid block translation and rotation as
systematic errors in the analysis, and removed them by transforming the
velocity data into the average strain rates of triangle elements composed of
adjacent GPS stations. By this transformation, original information about
intrinsic deformation is preserved. Applying a general inversion formula
using ABIC to the GPS strain data, we can obtain unbiased slip-deficit rate
distribution. We demonstrate the applicability of the GPS strain data
inversion method through the analyses of coseismic and interseismic GPS data
in the Japan region, where the North American, Pacific, Philippine Sea, and
Eurasian plates are interacting with each other in a complicated way.

-2 Speaker: Prof. Zhou, Shiyong (School of Earth and Space Sciences, Peking University; Professor)
Date: 14 January 2014 (Tue), 14:10-15:10
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Detecting the regional tectonic stress variations in background seismicity data through statistical earthquake modeling (Yajun Peng, Shiyong Zhou, Jiancang Zhuang and Jia Shi)
Abstract: Large
earthquakes could perturb the stress field in regions even thousands of
kilometers away, leading to abrupt changes in background seismicity. We have
developed a probability based approach, based on the epidemic-type aftershock
sequence model and the stochastic declustering method, to invert the smoothed
temporal variation of background seismicity rate and to extract useful
physical signals from complex seismicity patterns. An iterative algorithm is
constructed to estimate the background seismicity simultaneously by using a
combination of maximum likelihood estimate and weighted variable kernel
estimate. We verify this approach through simulations and confirm that it can
sensitively recover the onset of dynamic triggering. The algorithm is
applied to an earthquake catalog in Yunnan Province, China, and successfully
identifies a rapid increment of background seismicity rate following the
occurrence of the 2004 Sumatra Mw 9.2 earthquake, whereas no remote
triggering effect is detected following the occurrence of the 2005 Sumatra Mw
8.7 earthquake. This phenomenon agrees with GPS observations. It is found
that the elevated seismic activity within 15 d after the Sumatra earthquake
is mostly composed of shallow events, and a direct triggering relationship is
well established. We also studied
the possible dynamic triggering effect in Northern China, including the Tangshan
area, when the Japan Tohoku Mw 9.0 earthquake occurred on March 11, 2011,
to determine whether areas with large co-seismic displacement showed a
sudden abnormal increase in seismicity. As a result, the Japan Tohoku earthquake
has little effect on the total and background seismicity of Tangshan area,
which means that the seismic structure of Tangshan area is fundamentally
stable.

-3 Speaker: Dr. Wang, Ting (University of Otago, New Zealand; Lecturer)
Date: 14 January 2014 (Tue), 15:30-16:30
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Estimating the likelihood of volcanic eruptions with incomplete eruption record
Abstract: (TBA)

-4 Speaker: Ogata, Yosihiko (The Institute of Statistical Mathematics, Professor Emeritus / The University of Tokyo, Visiting Professor)
Date: 14 January 2014 (Tue), 16:40-17:40
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Foreshocks and short-term forecasting: comparisons between real seismicity and synthetic catalogs (Yosihiko Ogata and Koichi Katsura)
Abstract: Some
statistical characteristics of foreshocks in the JMA earthquake catalog are
quantitatively different from those in the catalogs simulated by the
space-time epidemic-type aftershock sequence (ETAS) model associated with the
Gutenberg-Richter (GR) law. Also, the information gain of foreshock
probability forecasting in the real seismicity is significantly larger than
in the synthetic catalogs.
No. 38
Speaker: Prof. Huang, Qinghua (Institute of Theoretical and Applied Geophysics, Peking University; Professor)
Date: 12 November 2013 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 6 (A508)
Title: Seismicity changes revealed by the Region-Time-Length (RTL) algorithm
Abstract: The
Region-Time-Length (RTL) algorithm, which takes into account the epicenter,
time, and magnitude of earthquakes, is an effective technique in detecting
seismicity changes, especially the seismic quiescence. Based on the RTL
algorithm and the Q-parameter (an average RTL parameter over a certain time
window), we can quantify the spatio-temporal characteristics of seismicity
changes. In order to reduce the possible ambiguity due to the selection of
model parameters in the RTL algorithm, we proposed an improved technique of
searching for the optimal model parameters. The applications of the RTL
algorithm in various tectonic regions indicated that seismic quiescence
anomalies generally started a few years prior to the occurrence of the
mainshock. The linear dimension of the seismic quiescence zone could be a few
to several times of the rupture dimension of the mainshock. The significance
of the seismic quiescence anomalies revealed by the RTL algorithm was
supported by the close investigations of model parameter effects and the
stochastic test based on randomized earthquake catalogs.
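The weighting idea behind the RTL statistic can be pictured with the stripped-down sketch below; the detrending, normalization, and parameter optimization used in actual RTL studies are omitted, and the distance, time, and magnitude-length scalings are generic assumptions.

# A stripped-down sketch of the RTL weighting idea (illustrative only): each
# past earthquake contributes weights that decay with epicentral distance and
# elapsed time and grow with rupture length.
import numpy as np

def rtl(x0, y0, t_eval, eq_x, eq_y, eq_t, eq_mag, r0=50.0, tau0=1.0):
    """Return the (un-detrended) product R*T*L at location (x0, y0), time t_eval.

    r0 is a characteristic distance in km, tau0 a characteristic time in years;
    rupture length l_i comes from a generic magnitude-length scaling.
    """
    past = eq_t < t_eval
    r = np.hypot(eq_x[past] - x0, eq_y[past] - y0)          # epicentral distance, km
    dt = t_eval - eq_t[past]                                 # elapsed time, years
    l = 10.0 ** (0.5 * eq_mag[past] - 1.8)                   # rough rupture length, km
    sel = (r < 2.0 * r0) & (dt < 2.0 * tau0) & (r > 1e-3)
    R = np.sum(np.exp(-r[sel] / r0))
    T = np.sum(np.exp(-dt[sel] / tau0))
    L = np.sum(l[sel] / r[sel])
    return R * T * L

# Toy catalogue around the target point (0, 0).
rng = np.random.default_rng(5)
n = 500
ex, ey = rng.uniform(-100, 100, n), rng.uniform(-100, 100, n)
et, em = np.sort(rng.uniform(0, 10, n)), rng.uniform(3.0, 5.5, n)
print("RTL at t = 10:", round(rtl(0.0, 0.0, 10.0, ex, ey, et, em), 3))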
No. 37
Speaker: Herrmann, Marcus (ETH Zurich, Switzerland, Swiss Seismological Service; Doctoral Student)
Date: 27 August 2013 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Forecasting Losses Caused by a M6.6 Scenario Earthquake Sequence in Basel, Switzerland
Abstract: When
people and their environment are not properly prepared, earthquakes pose a
serious threat. Recently, the SEISMO-12 earthquake scenario exercise
simulated the repeat of the 1356 Basel earthquake. This gave officials,
organizations, and the general public an idea of what may be expected in case
of a M6.6 earthquake. The present work relates to the scenario and
contributes to loss reduction by expressing the potential impact through
seismic risk. Reducing the short-term seismic risk requires the evacuation of
vulnerable buildings. However, one cannot always evacuate in times of an
ongoing seismic sequence. Based on information from the continuous seismicity,
probabilistic forecasts show increasing benefit for short-term defense
against earthquakes. Forecast probabilities subsequently allow time-varying
seismic hazard calculation. Only another combination with time-invariant loss
estimation permits the assessment of short-term seismic risk. Seismic risk
delivers a more direct expression of the socio-economic impact than seismic
hazard, but one must characterize vulnerability and exposure to estimate
risk. Risk assessment brings together a variety of data, models and
assumptions. Based on the specific earthquake scenario, I perform a
probabilistic forecast of human losses. Seismologists may not be responsible
for communicating short-term risk information to the public, but they have to
support decision-makers to take worthwhile actions that may save lives.
However, the low-probability environment and the complexity of involved
processes challenge decision-makers. A final cost-benefit analysis
provides greater benefit than pure statistical approaches by providing
objective statements that may justify evacuations. To deliver supportive
information in the simplest reasonable form, I propose a warning approach,
in terms of alarm levels, which allows one to explore worthwhile
mitigation actions for each district of the Basel region.

No. 36
Speaker: Dr. Enescu, Bogdan (Graduate Program in Earth Evolution Sciences, Faculty of Life and Environmental Sciences, University of Tsukuba; Associate Professor)
Date: 23 July 2013 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 5 (D313)
Title: Dynamic triggering of earthquakes in Japan due to the 2011 Tohoku-oki earthquake: some observations, stress modeling and interpretation
Abstract: (TBA)

No. 35
Speaker: Dr. Marzocchi, Warner (Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy; Chief Scientist)
Date: 3 July 2013 (Wed), 16:00-16:40
Place: The Institute of Statistical Mathematics, Seminar Room 5 (D313)
Title: Operational Earthquake Forecasting and Decision Making
Abstract: Traditionally,
seismic risk reduction is achieved only through a sound earthquake building
code. Nonetheless, some recent seismic disasters have highlighted the need
for enlarging the range of risk mitigation actions beyond that. In
particular, the occurrence of a seismic sequence may increase the weekly
probability of a large shock by orders of magnitude, although the absolute
probability usually remains below 1/100. Here, we summarize the state of the
art in short-term earthquake forecasting and discuss how these forecasts may
be used to mitigate seismic risk in this time horizon. Because of the low
probabilities and high false alarm rates of possible advisories, mandatory
mitigation actions would not be an effective practical strategy to reduce
risk. Alternatively, we propose some low cost strategies, such as increasing
vigilance and preparedness, for using probabilistic forecasting to mitigate
seismic risk. These are based on the 'nudging' principle of devolving
decision-making down from civic authorities to the individual level.

No. 34
Speaker: Dr. Chen, Xiaowei (Scripps Institution of Oceanography, University of California, San Diego; Post-Doctoral Fellow)
Date: 16 April 2013 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Aspects of earthquake triggering and seismicity clustering
Abstract: Earthquakes
strongly cluster in space and time, driven both by earthquake-to-earthquake triggering
and underlying physical processes, such as tectonic stress loading, increased
pore pressure, etc. To
understand the general characteristics of earthquake clustering from a large
dataset of earthquakes, I analyze seismicity in southern California. I use a
high-resolution earthquake catalog based on waveform cross-correlation to
study the spatial-temporal distribution of earthquakes. Parameters based on
event location, magnitude and occurrence time are computed for isolated
seismicity clusters. Spatial migration behavior is modeled using a
weighted-L1-norm method. Aftershock-like event clusters do not exhibit
significant spatial migration compared with earthquake swarms. Two triggering
processes are considered for swarms: slow slip and
fluid diffusion, which are distinguished based on a statistical analysis of
event migration. The results suggest fluid-induced seismicity is found across
southern California, particularly within geothermal areas. In the Salton Sea
geothermal field (SSGF), a correlation between seismicity and fluid injection
activities is seen. Spatial-temporal variations of earthquake stress drops
are investigated in different regions, and a distance-dependence of stress
drop from the injection source is found in the SSGF, suggesting the influence
of increased pore pressure. Temporal variation of stress drops within
mainshock source regions shows that foreshocks and earthquake swarms have
lower stress drops than background seismicity and aftershocks. These results,
combined with the spatial migration observed for some large foreshock
sequences, suggest that an aseismic transient process is likely involved in
foreshock triggering.

No. 33
Speaker: Dr. Han, Peng (Chiba University; Graduate Student)
Date: 14 February 2013 (Thu), 16:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Investigation of ULF seismo-magnetic phenomena in Kanto, Japan during 2000-2010
Abstract: Earthquakes
are one of the most destructive natural hazards, causing huge damage and high
casualties. Especially during the past decade, huge/mega earthquakes have hit
many countries. Thus, effective earthquake forecasting is important and
urgent. Since the end of last century, ULF seismo-magnetic phenomena have
been intensively studied. Recently, it has been considered a promising
candidate for short-term earthquake prediction as a number of case studies
have been reported. However, scientists have also found that
sometimes a sizeable earthquake happened without magnetic anomalies and
sometimes magnetic anomalies were followed by no earthquakes. Thus, the relation
between magnetic anomalies and seismicity has been questioned. Moreover, there
are two essential problems puzzling the researchers: (1)
what is the exact waveform of electro-magnetic signals associated with
earthquakes or underground activities; (2) how are the signals generated.
These two questions have not yet been answered clearly and fully. There are
still active debates in the geophysical community on the
seismo-electromagnetic phenomena. In order to verify, clarify, and evaluate
the ULF seismo-magnetic phenomena, long-term continuous monitoring of ULF
magnetic field in a seismically active area is required. Therefore, a
sensitive observation network has been established in Kanto-Tokai area since
the year 2000. Based on eleven years of observation, abundant geomagnetic data
have been accumulated, which provides an excellent opportunity to find
answers to the questions above. Thus, in this study I have conducted an
investigation of ULF seismo-magnetic phenomena in Izu and Boso Peninsulas,
Japan, based on the data observed from 2000 to 2010. First, case studies of major events
have been carried out. The energy of ULF geomagnetic signals at frequencies around
0.01 Hz has been investigated by wavelet transform analysis. In order to
minimize the influence of artificial noise, only the midnight data (LT
1:00-4:00)
have been utilized. To identify anomalous changes caused by ionospheric
disturbances, the standard station Memambetsu has been chosen as a reference
station. (1) Case studies of the 2000 Izu Islands earthquake swarm have
indicated that there are unusual geomagnetic energy enhancements in vertical
component before and during the earthquake swarm. (2) Case studies of the
2005 Boso M 6.1 earthquake have also shown clear geomagnetic energy
enhancements in vertical component before the earthquake. (3) Case studies of
the 2002 and 2007 slow slip events have demonstrated that there are
geomagnetic energy enhancements in both vertical and horizontal components
during the slip events. Then,
to verify and clarify the relation between ULF geomagnetic anomalies and
seismicity, statistical studies by superposed epoch analysis (SEA) have been
carried out. The results have indicated that before a sizeable earthquake
there are clearly higher probabilities of ULF anomalies than after the
earthquake: for
Seikoshi (SKS) station in Izu, the statistical results of daily counts are
significant at about 20-30 days before, one week to a few days before, and one
day after the event; for Kiyosumi (KYS) station in Boso, at around two weeks
before, a few days before, and one day after the event. Finally, to find out the detailed
waveform of anomalous magnetic signals, waveform analysis has been performed.
The results show that there are mainly two kinds of seismo-magnetic
signature. (1) Noise-like signals: Compared with
the background, the signals exhibit small increases of amplitudes at a wide
frequency range. (2) Transient/quasi-rectangular signals: the
signals have transient/quasi-rectangular waveforms with amplitudes of several
nT (~10^-9 T). The noise-like signals usually persist for several
days or even a few weeks, and are mainly associated with large earthquakes;
the transient/quasi-rectangular signals have durations of a few seconds to a few
tens of seconds, and are registered mainly during slow slip events. Based on the results obtained above,
we conclude that: (1) there is a correlation between ULF
geomagnetic anomalies and local sizeable earthquakes in Izu and Boso Peninsulas,
Japan, and the common period of significant results is a few days before and
one day after a sizeable earthquake; (2) there are mainly two kinds of
seismo-magnetic signature registered in the Izu and Boso Peninsulas:
noise-like signals and transient/quasi-rectangular signals. The mechanisms of
the anomalous geomagnetic signals are still unclear and need further studies.
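The superposed epoch analysis used above can be illustrated in generic form: daily anomaly counts are stacked around earthquake dates and compared with stacks around random dates; the window length, counts, and significance level in the sketch are invented.

# Minimal superposed epoch analysis (SEA) sketch, in generic form rather than
# the speaker's exact procedure.
import numpy as np

rng = np.random.default_rng(6)
n_days = 4000
daily_anomaly = rng.poisson(0.2, size=n_days)      # stand-in ULF anomaly counts
eq_days = np.sort(rng.choice(np.arange(60, n_days - 60), size=25, replace=False))

def stack(counts, key_days, half_window=30):
    """Average the series in a +/- half_window day window around each key day."""
    rows = [counts[d - half_window: d + half_window + 1] for d in key_days]
    return np.mean(rows, axis=0)

observed = stack(daily_anomaly, eq_days)

# Null distribution: repeat the stacking around randomly chosen key days.
null = np.array([stack(daily_anomaly,
                       rng.choice(np.arange(60, n_days - 60), size=25, replace=False))
                 for _ in range(500)])
threshold = np.percentile(null, 95, axis=0)         # pointwise 95% level

lags = np.arange(-30, 31)
significant = lags[observed > threshold]
print("lags (days relative to earthquake) exceeding the 95% level:", significant)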
No. 32
Speaker: Prof. Savage, Martha (School of Geography, Environment and Earth Sciences, Victoria University of Wellington, New Zealand; Professor)
Date: 13 November 2012 (Tue), 16:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Towards Predicting Earthquakes and Volcanic Eruptions using Statistical Techniques
Abstract: Predicting
natural hazards is fraught with difficulty, and statistical techniques are necessary to put
such studies on a firm footing.
Here we discuss two methods that we have applied to volcanic areas. Analysis of the rates of earthquake
activity (CURATE) is used to determine the characteristics of earthquake
swarms and to try to understand how they develop over time. The technique compares favourably to
other declustering techniques and allows us to consider whether some swarms
are triggered by underlying processes that create diffuse seismicity that is
not well modelled by Omori's law. We also analyse waveforms of earthquakes to
determine seismic anisotropy, which depends upon stress orientation and
magnitude, which in turn can be influenced by earthquake and volcanic
activity. Seismic waves travel
faster when their particle motion is along the cracks, which orient with the
principal stress direction. At
volcanoes around the world, we discovered significant changes in seismic
anisotropy strength and orientation that correlate with magma movement. Detecting and evaluating such changes
is complicated by scattered measurements, which sometimes have 90 degree
ambiguities and we have been considering ways to make the techniques more
robust. These observations will
provide the data that may eventually lead to prediction tools.

No. 31
Speaker: Omi, Takahiro (The University of Tokyo / JST; Postdoctoral Researcher)
Date: 25 October 2012 (Thu), 16:00-17:00
Place: The Institute of Statistical Mathematics, 4th-floor lounge
Title: A state-space model for estimating the time-varying detection rate of earthquakes and its application to immediate probabilistic prediction of aftershocks
Abstract: After
a large earthquake, the detection rate of earthquakes temporarily decreases,
and a lot of earthquakes are missed from a catalog. Such incompleteness of
the catalog prevents us from estimating statistical models of aftershock
activity accurately. To overcome this difficulty, Ogata and Katsura (2005)
modeled the incomplete catalog by using a parametric model of a time-varying
detection rate of earthquakes. In
this talk, we propose a state space model for estimating the time-varying
detection rate. In our model, the estimation problem is solved recursively
by using a Kalman filter and a Gaussian approximation of the posterior
probability distribution. Thus our model has an advantage in real-time
computation. Finally, our model is combined with the Omori-Utsu law to predict
the occurrence rate of underlying aftershocks. We present some results on the
immediate probabilistic prediction of aftershocks.
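The combination described above can be pictured with the following toy sketch, in which the observed aftershock rate is an Omori-Utsu rate multiplied by a magnitude-integrated detection probability; the normal-CDF detection model, the relaxing completeness level, and all parameter values are illustrative assumptions rather than the speaker's state-space formulation.

# Toy sketch: Omori-Utsu aftershock rate combined with a time-varying
# magnitude-detection probability.  Functional forms and parameters are
# assumptions for illustration only.
import numpy as np
from scipy.stats import norm

def omori_utsu(t, K=2000.0, c=0.05, p=1.1):
    """Total aftershock rate (events/day above the reference magnitude M0)."""
    return K * (t + c) ** (-p)

def detected_fraction(mu_t, sigma=0.3, b=1.0, M0=2.0, Mmax=9.0):
    """Fraction of events above M0 that are detected, for a GR magnitude
    distribution and detection probability Phi((M - mu_t)/sigma)."""
    m = np.linspace(M0, Mmax, 2000)
    beta = b * np.log(10.0)
    gr_pdf = beta * np.exp(-beta * (m - M0))
    return np.trapz(norm.cdf((m - mu_t) / sigma) * gr_pdf, m)

def completeness_level(t, mu0=4.5, mu_inf=2.0, tau=0.5):
    """Completeness magnitude decaying from mu0 right after the mainshock."""
    return mu_inf + (mu0 - mu_inf) * np.exp(-t / tau)

for t in (0.05, 0.2, 1.0, 5.0):
    total = omori_utsu(t)
    frac = detected_fraction(completeness_level(t))
    print(f"t = {t:5.2f} d: true rate {total:8.1f}/d, detected fraction {frac:.2f}, "
          f"observed ~{total * frac:8.1f}/d")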
No. 30
Speaker: Dr. Wang, Ting (University of Otago, New Zealand; Lecturer)
Date: 24 July 2012 (Tue), 16:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 4 (D312B)
Title: Hidden Markov models in modelling earthquake data
Abstract: Earthquakes
are processes in which the internal workings (such as the accumulation of
tectonic stress) are only observed indirectly, although the final effects are
all too observable! Hidden Markov models (HMMs, a general statistical
framework for modelling partially observed systems) are an intuitively
attractive idea for analysing seismicity. I will briefly introduce the idea
of using HMMs to investigate earthquake cycles, and then focus on one case
study incorporating GPS data into earthquake forecasting. A new
model we developed, the Markov-modulated Hawkes process with stepwise decay
(MMHPSD), can capture the cyclic parent-generating-offspring feature of the
temporal behaviour of earthquakes. The decomposition of the earthquake cycle
motivated the construction of a non-linear filter measuring short-term
deformation rate-changes to extract signals from GPS data. This filter was
applied to a) deep earthquakes in central North Island, New Zealand, and b)
shallow earthquakes in Southern California. The study examines the use of
HMMs to extract possible precursory information that indicates an elevated
probability of large earthquake occurrence. This study is controversial and
still requires further tests. Japan is an ideal place to carry out this test.
No. 29
-1 Speaker: Prof. Daley, Daryl (Department of Mathematics and Statistics, The University of Melbourne; Emeritus Professor)
Date: 6 July 2012 (Fri), 14:30-15:30
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Dimension walks and Schoenberg spectral measures for isotropic random fields
Abstract: Schoenberg
(1938) showed how Bochner's basic representation theorem for positive
definite functions (e.g. correlation function of a stationary stochastic
process) `simplifies' for spatial processes (d-dimensional random fields)
which are isotropic: the standard Fourier kernel function is
replaced by the characteristic function of a random direction in d-space and
the spectral measure, instead of being on d-space, is on the positive
half-line. The
talk describes how Wendland's `dimension walks', which were defined earlier
by Matheron as Descente and Montee in studying relations between d-D and
either (d+2)-D or (d-2)-D correlation functions, are equivalent to simple
modifications of their d-Schoenberg measures. Another
family of dimension walks arises from projections from unit d-spheres to
lower dimensional spheres, first via the kernel functions in the Schoenberg
representation and then more generally, for d-Schoenberg measures.
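For reference, the Schoenberg (1938) representation referred to above can be written, in one common normalization (conventions differ between authors), as

$$\varphi(r) = \int_0^\infty \Omega_d(ru)\,F(du), \qquad \Omega_d(x) = \Gamma\!\left(\tfrac{d}{2}\right)\left(\tfrac{2}{x}\right)^{(d-2)/2} J_{(d-2)/2}(x),$$

where $F$ is a finite non-negative measure on $[0,\infty)$ (the d-Schoenberg measure) and $\Omega_d$ is the characteristic function of a uniformly random direction in $d$ dimensions evaluated at radius $x$.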
-2 Speaker: Dr. Adrian Baddeley (CSIRO Mathematics, Informatics & Statistics; Research Scientist)
Date: 6 July 2012 (Fri), 15:30-16:30
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Leverage, influence and residual diagnostics for point process models
Abstract: For a
spatial point process model fitted to spatial point pattern data, we develop
diagnostics for model validation, analogous to the classical measures of
leverage and influence and residual plots in a generalized linear model. The
diagnostics can be characterised as derivatives of basic functionals of the
model. They can also be derived heuristically (and computed in practice) as
the limits of classical diagnostics under increasingly fine discretizations
of the spatial domain. We apply the diagnostics to example datasets where
there are concerns about model validity.

-3 Speaker: Shimatani, Kenichiro (The Institute of Statistical Mathematics, Department of Data Science; Associate Professor)
Date: 6 July 2012 (Fri), 16:30-17:30
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Inferring parameters in inhomogeneous Neyman-Scott processes using the Palm likelihood
Abstract: Plant
populations often exhibit spatially clustered distributions, for which
two processes, limited seed dispersal and limited safe sites, are the primary
mechanisms. The inhomogeneous Neyman-Scott process can combine and model
these two ecological processes. Estimating the model parameters allows
evaluation of the relative effects of dispersal and safe sites
retrospectively from spatial individual distribution data along environmental
gradients. Here we propose a likelihood-based method for this spatial point
process by extending the recently developed method, the Palm likelihood. Our
approach was applied to even-aged black spruce forests in Canada. We obtained
a set of model parameters that well reproduced the observed spatial patterns,
and the fitted point processes predicted the reassembly pathway of the boreal
forests.
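As a pointer to what the Palm likelihood works with, the sketch below evaluates the Palm intensity of a stationary Thomas process under one standard parametrization; the parameter values are invented, and the inhomogeneous extension discussed in the talk is not included.

# Illustrative sketch: Palm intensity of a stationary Thomas process, the
# quantity on which the Palm-likelihood approach is based.  kappa = parent
# intensity, mu = mean offspring per parent, sigma = Gaussian dispersal.
import numpy as np

def palm_intensity(r, kappa, mu, sigma):
    """Expected point density at distance r from a typical point of a
    stationary Thomas process: kappa*mu + mu/(4*pi*sigma^2)*exp(-r^2/(4*sigma^2))."""
    return kappa * mu + mu / (4.0 * np.pi * sigma ** 2) * np.exp(-r ** 2 / (4.0 * sigma ** 2))

kappa, mu, sigma = 0.002, 30.0, 5.0       # parents per unit area, offspring, metres
r = np.array([1.0, 5.0, 10.0, 50.0])
print("overall intensity:", kappa * mu)
print("Palm intensity   :", np.round(palm_intensity(r, kappa, mu, sigma), 4))
# At small r the Palm intensity greatly exceeds the overall intensity,
# which is exactly the clustering signal the Palm likelihood exploits.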
No. 27
Speaker: Dr. Jiang, Changsheng (Institute of Geophysics, China Earthquake Administration; Associate Research Fellow (Associate Professor))
Date: 19 June 2012 (Tue)
Place: The Institute of Statistical Mathematics, 4th-floor lounge
Title: Background seismicity and its application in the study of Accelerating Moment Release (AMR) and the Pattern Informatics (PI) method (Changsheng Jiang, Zhongliang Wu and Jiancang Zhuang)
Abstract:

No. 26
Speaker: Dr. Chan, Chung-Han (Department of Geosciences, National Taiwan University; Postdoctoral Fellow)
Date: 29 May 2012 (Tue), 16:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Short-term earthquake forecasting through a smoothing kernel and the rate-and-state friction law: Application to Taiwan and the Kanto region, Japan
Abstract: An
earthquake forecasting approach was employed for estimating the
spatio-temporal distribution of seismicity density in Taiwan and the Kanto
region, Japan. To evaluate the long-term seismicity rate, a smoothing kernel
function based on the distribution of past earthquakes was proposed. With the
use of the rate-and-state friction model, short-term rate evolution according
to the fault-interaction stress disturbance was forecasted. To test
feasibility of this model, it was applied using a catalog for the area
surrounding Taiwan. This leads to good agreement between the model forecast and
actual observations, demonstrating its forecasting accuracy. To check its
stability, we estimated the deviation of the models according to different
parameters used in the approach. We conclude that deviations within each
parameter had an insignificant impact on forecasting stability. For the
application to the Kanto region, we proposed a 3D forecasting model due to
its complex tectonic setting. The seismicity patterns at various depths are
illustrated and the seismicity rate in the crust and along the subduction
zones can be distinguished. The high seismicity rate offshore in the east at
the depth of 20-50 km can be associated with stress increase imparted by the
2011 Tohoku sequence. This phenomenon can be forecasted according to the
rate-and-state friction model. The proposed approach, with verified applicability
for seismicity forecasts, could be useful for seismic hazard mitigation. The
application could provide a warning before the occurrence of consequent
earthquakes and would be valuable for consequent studies, e.g., probabilistic
seismic hazard assessment.
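The rate-and-state ingredient mentioned above can be summarized by the Dieterich (1994) response of seismicity rate to a sudden stress step; the sketch below uses that closed form with invented parameter values.

# Dieterich (1994) rate-and-state seismicity response to a Coulomb stress
# step, in its standard closed form; parameter values are illustrative only.
import numpy as np

def dieterich_rate(t, delta_cfs, r_background=1.0, a_sigma=0.01, t_a=100.0):
    """Seismicity rate R(t) after a stress step delta_cfs (MPa) at t = 0:
    R = r / (1 + (exp(-delta_cfs / (A*sigma)) - 1) * exp(-t / t_a)),
    with A*sigma in MPa and the aftershock duration t_a in days."""
    gamma = (np.exp(-delta_cfs / a_sigma) - 1.0) * np.exp(-t / t_a)
    return r_background / (1.0 + gamma)

t = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])       # days after the mainshock
for dcfs in (0.05, -0.05):                           # positive and negative stress steps
    print(f"dCFS = {dcfs:+.2f} MPa:", np.round(dieterich_rate(t, dcfs), 3))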
No. 25
Speaker: Dr. Guillas, Serge (Department of Statistical Science, University College London; Reader)
Date: 18 May 2012 (Fri), 16:00-18:00
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Earthquake occurrence: emulation and climate forcing
Abstract: In
earthquake occurrence studies, the so-called q value can be considered both
as one of the parameters describing the distribution of interevent times and
as an index of non-extensivity. Using simulated datasets, we compare four
estimators, based on the principle of maximum entropy, the method of moments, maximum
likelihood, and probability weighted moments (PWM), of the parameters of the
distribution of inter-event times, assumed to be a generalized Pareto
distribution. We then use the unbiased version of PWM estimators to compute
the q value for the distribution of inter-event times in a realistic
earthquake catalogue simulated according to the epidemic type aftershock
sequence (ETAS) model. Finally, we use these findings to build a statistical
emulator of the q values of ETAS model. We employ treed Gaussian processes to
obtain partitions of the parameter space so that the resulting model respects
sharp changes in physical behaviour. The emulator is used to understand the
joint effects of input parameters on the q value, exploring the relationship
between ETAS model formulation and distribution of inter-event times. We
then present statistical evidence for a temporal link between variations in
the El Niño-Southern
Oscillation (ENSO) and the occurrence of earthquakes on the East Pacific Rise
(EPR). We adopt a zero-inflated Poisson regression model to represent the
relationship between the number of earthquakes in the Easter microplate on
the EPR and ENSO (expressed using the southern oscillation index (SOI) for
east Pacific sea-level pressure anomalies) from February 1973 to February
2009. We also examine the relationship between the numbers of earthquakes and
sea levels, as retrieved by Topex/Poseidon from October 1992 to July 2002. We
observe a significant positive influence of SOI on seismicity:
positive SOI values trigger more earthquakes over the following 2 to 6 months
than negative SOI values. There is a significant negative influence of
absolute sea levels on seismicity (at 6 months lag). We propose that
increased seismicity is associated with ENSO-driven sea-surface gradients
(rising from east to west) in the equatorial Pacific, leading to a reduction
in ocean-bottom pressure over the EPR by a few kilopascal. This relationship
is opposite to reservoir-triggered seismicity and suggests that EPR fault
activity may be triggered by plate flexure associated with the reduced
pressure.
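The PWM estimation mentioned above can be sketched as follows for a generalized Pareto sample, using the classical Hosking-Wallis (1987) estimators; the simulated inter-event times and parameter values are synthetic and not taken from the talk.

# Sketch of a probability-weighted-moments (PWM) fit of a generalized Pareto
# distribution to inter-event times (Hosking-Wallis estimators).
import numpy as np
from scipy.stats import genpareto

def gpd_pwm_fit(x):
    """Return (xi, sigma): shape (scipy convention) and scale from PWM estimators."""
    x = np.sort(np.asarray(x))
    n = len(x)
    b0 = x.mean()
    b1 = np.mean((np.arange(n) / (n - 1.0)) * x)     # plotting-position estimate of beta_1
    k = b0 / (b0 - 2.0 * b1) - 2.0                   # Hosking's k
    sigma = 2.0 * b0 * b1 / (b0 - 2.0 * b1)
    return -k, sigma                                 # scipy's shape is xi = -k

rng = np.random.default_rng(7)
true_xi, true_sigma = 0.2, 1.5
sample = genpareto.rvs(true_xi, scale=true_sigma, size=5000, random_state=rng)
xi_hat, sigma_hat = gpd_pwm_fit(sample)
print(f"true   xi={true_xi:.2f} sigma={true_sigma:.2f}")
print(f"fitted xi={xi_hat:.2f} sigma={sigma_hat:.2f}")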
No. 24
Speaker: Terakawa, Toshiko (Earthquake and Volcano Research Center, Graduate School of Environmental Studies, Nagoya University; Assistant Professor)
Date: 11 May 2012 (Fri), 14:00-17:00
Place: The Institute of Statistical Mathematics, Seminar Room 3 (D312A)
Title: Temporal Changes in Fault Strength and Pore Fluid Pressures Following the 2011 off the Pacific Coast of Tohoku Earthquake
Abstract: Extensive
aftershocks and triggered seismic events are ubiquitous following large
earthquakes, but the controlling mechanisms are not yet understood. Focal
mechanisms of these events can provide insight into physical triggering
mechanisms because they reflect friction coefficient and pore fluid pressure
on the fault as well as the tectonic stress pattern. In the present study we
investigate physical processes triggering seismic events in inland region of
the northeast Japan following the 2011 off the Pacific Coast of Tohoku
earthquake (Mw = 9.0) by comparing focal mechanisms with the tectonic stress
pattern and changes in the Coulomb failure function (ΔCFF). The local
excitation of the seismicity rate in regions with negative ΔCFF indicates
that these aftershocks would have been triggered by decrease of fault
strength due to the increase of pore fluid pressures. We show a plausible
explanation that seismic events on unfavourably oriented pre-existing faults
relative to the tectonic stress pattern, or focal mechanisms inconsistent
with the tectonic stress pattern, would be evidence for drastic decrease of
fault strength due to an increase of pore fluid pressures.
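The Coulomb failure function change used above can be illustrated with a short sketch that resolves a stress-change tensor onto a fault plane; the stress tensor, fault geometry, friction value, and sign conventions below are illustrative assumptions, not the speaker's calculation.

# A minimal sketch of the Coulomb failure function change,
# dCFF = d_tau + mu' * d_sigma_n (shear stress change in the slip direction
# plus effective friction times the normal stress change, tension positive).
import numpy as np

def delta_cff(d_stress, strike_deg, dip_deg, rake_deg, mu_eff=0.4):
    """Resolve a stress-change tensor (Pa; x=north, y=east, z=down; tension
    positive) onto a fault plane/slip direction (Aki & Richards angles)."""
    s, d, r = np.radians([strike_deg, dip_deg, rake_deg])
    n = np.array([-np.sin(d) * np.sin(s), np.sin(d) * np.cos(s), -np.cos(d)])
    u = np.array([np.cos(r) * np.cos(s) + np.sin(r) * np.cos(d) * np.sin(s),
                  np.cos(r) * np.sin(s) - np.sin(r) * np.cos(d) * np.cos(s),
                  -np.sin(r) * np.sin(d)])
    traction = d_stress @ n
    d_sigma_n = traction @ n          # normal stress change (unclamping > 0)
    d_tau = traction @ u              # shear stress change in the slip direction
    return d_tau + mu_eff * d_sigma_n, d_tau, d_sigma_n

# Example: 0.1 MPa of east-west tension applied to a north-striking 45-degree
# thrust fault; tension unclamps the fault but reduces the driving shear.
d_stress = np.diag([0.0, 1.0e5, 0.0])
dcff, dtau, dsn = delta_cff(d_stress, strike_deg=0.0, dip_deg=45.0, rake_deg=90.0)
print(f"dCFF = {dcff/1e6:+.3f} MPa  (d_tau = {dtau/1e6:+.3f}, d_sigma_n = {dsn/1e6:+.3f})")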
No. 23
Speaker: Prof. Khmaladze, Estate V. (School of Mathematics, Statistics and Operations Research, Victoria University of Wellington, New Zealand; Professor)
Date: 12 December 2011 (Mon), 16:00-17:00
Place: The Institute of Statistical Mathematics (Tachikawa), Seminar Room 3 (D312-A)
Title: Infinitesimal analysis of set-valued functions and applications to spatial statistics and image analysis
Abstract: (PDF) The
problem under consideration started with a study of the spatial change-point
problem, or the change-set problem as we called it: suppose there
is a set $K$, such that our observations inside this set behave differently
than anywhere outside it. Given $n$ observations, how can we test that a
given $K$ is the correct one against its small perturbation $K(\varepsilon)$
as an alternative? In the huge number of publications on the change-set
problem, almost all devoted to the estimation of the change-set, we could not
find such an object as an "alternative set" $K(\varepsilon)$. One
reason for this probably is that it is not easy to see how to describe
small changes in sets. We
developed an appropriate notion of the derivative of a set-valued function in (Khmaladze, 2007) and used it to
build a version of contiguity theory in (Einmahl, Khmaladze, 2011) for
statistical problems where the parameter of interest is a set. One
single result here is that if $\Phi_n$ is a point process with increasing
intensity $n$, and the symmetric difference $K(\varepsilon)\,\Delta\, K$ shrinks
and "vanishes" as $\varepsilon \to 0$, then the sequence
$\Phi_n(K(\varepsilon)\,\Delta\, K)$ lives in the limit on the derivative set
$dK(\varepsilon)/d\varepsilon$:
$$\Phi_n\bigl(K(\varepsilon)\,\Delta\, K\bigr) \to \Psi\bigl(dK(\varepsilon)/d\varepsilon\bigr), \quad \text{as } n\to\infty,\ \varepsilon\to 0.$$
In
the talk we present the main framework of this approach. We hope that some
discussions would lead to further applications.

No. 22
Speaker: Iwata, Takaki (The Institute of Statistical Mathematics, Prediction and Knowledge Discovery Research Center, Earthquake Prediction Analysis Group; Project Associate Professor)
Date: 25 November 2011 (Fri), 13:30-14:30
Place: The Institute of Statistical Mathematics (Tachikawa), Seminar Room 4 (D312B)
Title: The slip distribution of the 2011 Tohoku-oki earthquake inferred from the spatial distribution of its aftershocks
Abstract: We
have developed a method to estimate the spatial slip distribution of a large earthquake
based on its on-fault aftershock activity and the rate- and state-dependent
friction model (Dieterich, 1994, JGR). This talk presents the
application of this method to the 2011 Tohoku-oki earthquake. The outline of
the method is as follows. First, we divided the source area of the
destructive earthquake into 450 subfaults, each 20 x 20 km in size. Next
the slip on each subfault was optimized to make the expected spatial
distribution of aftershocks fit to the observed distribution during one day
after the mainshock. The expected distribution was derived from the
Dieterich's formulation, and the goodness-of-fit between the expected and
observed distributions was evaluated by means of the likelihood for
a point-process model. Then we constructed a Bayesian model incorporating a
smoothness constraint on the spatial slip distribution, because optimizing
such a huge number of parameters is unstable; the incorporation of the
constraint enhances the stability of the optimization. To compute the posterior
distribution of the parameters in this Bayesian framework, the Markov Chain
Monte Carlo method was used. This approach found an asperity
close to the hypocenter and some small asperities located to the southwest of
the hypocenter. This feature is consistent with the results of some slip
inversions based on seismograph, geodetic, and/or tsunami data, suggesting
that seismic activity data would play an important role in the estimation of
rupture process of an earthquake.

No. 21
-1 Speaker: Prof. Schorlemmer, Danijel (GFZ German Research Centre for Geosciences, Potsdam, Germany; Professor)
Date: 18 October 2011 (Tue), 15:00-16:00
Place: The Institute of Statistical Mathematics (Tachikawa), Seminar Room 3 (D312-A)
Title: Advancements in Probabilistic Seismic Network Completeness Studies
Abstract: An
important characteristic of any seismic network is its detection
completeness, which should be considered a function of space and time. Many
researchers rely on robust estimates of detection completeness, especially
when investigating statistical parameters of earthquake occurrence. We
present the Probability-based Magnitude of Completeness (PMC) method for
computing the spatial variation and temporal evolution of detection
capability of seismic networks based on empirical data only:
phase data, station information, and the network specific attenuation
relation. New developments are extending this method to complex 3D structures
like mining environments. We
present studies of regional networks from California, Switzerland, Italy,
Japan, New Zealand, and compare the result with estimated completeness levels
of other methods. We report on the time evolution of monitoring completeness
in these regions. Scenario computations show the impact of different possible
network failures and offer estimates of possible network optimization
strategies. Optimizations reduce the necessary processor time for
computation. All presented results are published on the CompletenessWeb (www.completenessweb.org), from which
the user can download completeness data from all investigated regions,
software codes for reproducing the results, and publication-ready and
customizable figures.

-2 Speaker: Prof. Rundle, John B. (Department of Physics and Geology, University of California, Davis, U.S.A.; Distinguished Professor)
Date: 18 October 2011 (Tue), 16:00-18:00
Place: The Institute of Statistical Mathematics (Tachikawa), Seminar Room 3 (D312-A)
Title: Forecasting Large Earthquakes: Problems, Pitfalls and Promise
Abstract: Forecasting
the future behavior of a stochastic complex system is a necessary and
critical activity with wide applications. As the mathematician Edward Thorp
showed many years ago [1,2], forecasting has applications in games of chance
as well as in financial markets.
Both fields represent applications of statistics, stochastic random
walks, and probability theory.
Objective evaluation of forecasts by established tests and measures is
also a necessary and important component of a forecasting system. Many of the modern tests have been
tabulated at [3]. Earthquake
forecasts are a special case of the forecasting problem, particularly as
applied to large earthquakes such as the March 11, M9 Off the Pacific Coast of
Tohoku earthquake. Forecasts of
future events in complex systems are in general plagued by incomplete
information, a problem that must be considered in constructing forecasts. In addition to these problems,
delivery of real-time forecast information to the scientific community and to
the public is an issue as well.
Here, web 2.0 technology is helpful in allowing rapid dissemination of
information. In this lecture, I
shall discuss these general aspects of the forecasting problem as applied to
earthquake forecasting. I will
discuss ideas based on the Natural Time Weibull method of earthquake
forecasting, recently developed by our group. I will also discuss our experiences
with numerical earthquake simulations, as well as with public outreach using
the World Wide Web (see www.openhazards.com). [1]
E. Thorp and S. Kassouf, Beat the Market: A Scientific
Stock Market System, Random House
(1967) [2]
E. Thorp, Beat the Dealer: A Winning Strategy for the Game of
Twenty-One, Random House (1962) [3]
http://www.cawcr.gov.au/projects/verification/
Seminar No. 20
Speaker: Dr. Yan Y. Kagan (Department of Earth and Space Sciences (ESS), UCLA, U.S.A.; Researcher)
Date & Time: 2 September 2011 (Fri), 16:00-17:00
Venue: The Institute of Statistical Mathematics, Seminar Room 4 (D312-B), Tachikawa
Title: Statistical properties of earthquake occurrence and their application for earthquake forecasting
Abstract:
Earthquake occurrence exhibits scale-invariant statistical properties: (a) Earthquake
size distribution is a power-law (the Gutenberg-Richter relation for
magnitudes or the Pareto distribution for seismic moment). The principle of
energy conservation requires that the distribution be limited on the high
side; thus we use the generalized gamma or tapered Pareto distribution. The
observational value of the distribution index is about 0.65. However, it can
be shown that empirical evaluation is upward biased, and the index of 1/2,
predicted by theoretical arguments, is likely to be its proper value. The
corner (maximum) moment has a universal value for shallow earthquakes
occurring in subduction zones. We also determined the corner moment values
for 8 other tectonic zones. (b) Earthquake
occurrence has a power-law temporal decay of the rate of the aftershock and
foreshock occurrence (Omori's law), with the index 0.5 for shallow
earthquakes. The short-term clustering of large earthquakes is followed by a
transition to the Poisson occurrence rate. In the subduction zones this
transition occurs (depending on the deformation rate) after 7-15 years,
whereas in active continents or plate-interiors the transition occurs after
decades or even centuries. (c)
The spatial distribution of earthquakes is fractal: the
correlation dimension of earthquake hypocenters is equal to 2.2 for shallow
earthquakes. (d)
The stochastic 3-D disorientation of earthquake focal mechanisms is
approximated by the rotational Cauchy distribution. On
the basis of our statistical studies, since 1977 we have developed
statistical short- and long-term earthquake forecasts to predict earthquake
rate per unit area, time, and magnitude. The forecasts are based on smoothed
maps of past seismicity and assume spatial and temporal clustering. Our
recent program forecasts earthquakes on a 0.1 degree grid for a global region
90N--90S latitude. For this purpose we use the PDE catalog that reports many
smaller quakes (M>=5.0). For the long-term forecast we test two types of
smoothing kernels based on the power-law and on the spherical Fisher
distribution. We employ adaptive kernel smoothing which improves our forecast
in seismically quiet areas. Our forecasts can be tested within a relatively
short time period since smaller events occur with greater frequency. The
forecast efficiency can be measured by likelihood scores expressed as the
average probability gains per earthquake compared to the spatially or
temporally uniform Poisson distribution. The other method uses the error
diagram to display the forecasted point density and the point events. As an
illustration of our methods, we are trying to answer a question: was
the Tohoku mega-earthquake of 2011/3/11 a surprise? We consider three issues
related to the 2011 Tohoku earthquake: (1)
Why was the magnitude limit for the Tohoku region so badly underestimated,
and how can we estimate realistic limits for subduction zones in general? (2)
How frequently can such large events occur off Tohoku? (3)
Could short-term forecasts have offered effective guidance for emergency
preparation? Two
methods can be applied to estimate the maximum earthquake size in any region:
statistical analysis of available earthquake records, and the moment
conservation principle -- how earthquakes release tectonic deformation. We
have developed both methods since 1991. For subduction zones, the seismic
record is usually insufficient (in fact it failed badly for Tohoku), because
the largest earthquakes are so rare. However, the moment conservation
principle yields consistent estimates for all subduction zones. Various
measurements imply maximum moment magnitudes of the order 9.0--9.7. A
comparison of the inter-earthquake secular strain accumulation and its
release by the coseismic slip implies a similar maximum earthquake size
estimate. Beginning in 1999 we
used our statistical short- and long-term earthquake forecasts, based on the
GCMT catalog, for the western Pacific, including Japan. We have posted them
on the web and included expected focal mechanisms as well. Long-term
forecasts indicate that the average frequency for magnitude 9 earthquakes in
the Tohoku area is about 1/400 years. We have archived several forecasts made
before and after the Tohoku earthquake. As expected, the Tohoku
mega-earthquake changed the forecasted long-term rate by just a few percent.
However, the magnitude 7.5 foreshock increased the short-term rate to about
100 times the long-term rate, and the magnitude 9 event increased it briefly
to more than 1000 times the long-term rate. These results could well justify
the development of an operational earthquake forecasting plan.
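For reference, the tapered Pareto (Kagan) distribution mentioned above is commonly written through its survivor function (generic notation, not necessarily the exact parameterization used in the talk):

\bar F(M) \;=\; \Pr(\text{moment} > M) \;=\; \left(\frac{M_t}{M}\right)^{\beta} \exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t,

where M_t is the threshold (completeness) moment, \beta is the index discussed above (about 0.65 empirically, 1/2 theoretically; \beta = \tfrac{2}{3} b in terms of the Gutenberg-Richter b-value), and M_c is the corner moment that controls the exponential taper.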
Seminar No. 19
Date & Time: 4 March 2011 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 3 (D312-A), Tachikawa
-1 Speaker: Dr. Abdelhak Talbi (Earthquake Research Institute, The University of Tokyo; Foreign Researcher)
Title: Analysis of Earthquake Inter-event Times
Abstract:
Understanding
temporal behavior of earthquakes is a fundamental step towards building
reliable statistical models fitting observed seismicity. A successful class of
models assumes two seismicity components corresponding to a stable 'background'
and a varying 'triggered' rate (or inter-event times). In this
study, the distribution of inter-event times is modeled assuming triggered
events governed by a non-homogenous Poisson process, and background events
governed by different hypothetical distributions (exponential, gamma, and
Weibull). The model is analytically introduced using Palm-Khinchine equations
and fitted in practice to seismicity data from southern California, Japan and
Turkey. The analytic form of the distribution is discussed when different
prior hypotheses are adopted. In a second step, the temporal clustering of
events is studied using the distance between the whole distribution of
inter-event times, and the residual distributions obtained using different
declustering approaches. Short and long range correlations are studied in
space and time. The residual background process is found dominant around the
mean inter-event time and the mean inter-event distance. The former analysis
describes seismicity as the accumulation of local perturbations related to a
unique mean-field 'background' process characterized by the mean inter-event
time and the mean inter-event distance.
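A minimal sketch (Python/SciPy; not the speaker's code) of the kind of inter-event-time analysis described above: fit a gamma distribution to a set of inter-event times and compare its mean with the empirical mean. The synthetic catalog below is an assumption used only to make the example self-contained.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
event_times = np.sort(rng.uniform(0.0, 1000.0, size=500))   # toy catalog (days)
dt = np.diff(event_times)                                    # inter-event times

# Fit a gamma distribution with the location fixed at zero
shape, loc, scale = stats.gamma.fit(dt, floc=0.0)
print(f"gamma shape={shape:.2f}, scale={scale:.2f}")
print(f"empirical mean={dt.mean():.2f}, fitted mean={shape * scale:.2f}")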
-2 Speaker: Prof. Rodolfo Console (National Institute of Geophysics and Volcanology, Rome, Italy; Senior Scientific Advisor)
Title: Short-term earthquake forecasting before and during the L'Aquila (Central Italy) seismic sequence of April 2009
Abstract:
The
M5.9 earthquake of April 6th, 2009, which caused more than 300
casualties in the city of L'Aquila and neighboring villages in Central Italy,
immediately generated a lot of discussions about the potential practical use
of foreshocks and other kinds of information for mitigating seismic risk among
the population. These discussions triggered studies related to the validity
of statistical clustering models such as the ETAS model, not only for
forecasting aftershocks, but also mainshocks following potential foreshocks.
In the frame of the above mentioned studies, this presentation reports
preliminary results of the statistical analysis of the L'Aquila
seismic sequence by means of a version of the ETAS family of models. The free
parameters used in the algorithm are obtained through the maximum likelihood
method from a learning data set of instrumental seismicity collected from
2005 up to March 2009 in the region of L'Aquila.
Our method includes statistical declustering of the background seismicity by
an iterative process until the maximum likelihood of the learning data set
under the ETAS model is obtained. For testing purposes, an algorithm for
producing simulations of seismic series has been developed and applied to
produce synthetic catalogues, the statistical properties of which are
compared with those of the real one. Finally, the daily forecasts of
earthquakes at different threshold magnitudes were produced for a testing
period including the L'Aquila 2009 mainshocks and its largest aftershocks.
The results show that the probability of occurrence of an M5.9 event computed
from the ETAS algorithm at midnight preceding the L'Aquila 2009 mainshock,
although much higher than the background Poisson probability, was quite low
compared with reasonable expectations for a
practical operational forecast. Moreover, the comparison between the daily
rate expected by the ETAS forecast method and the real daily number of
aftershocks shows a systematic underestimation of that rate.
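The step from a forecast rate to an occurrence probability, as discussed above, can be sketched as follows (Python; a simplified illustration under a Poisson assumption for the forecast interval, not the authors' algorithm). The numbers are placeholders.

import math

def prob_of_large_event(rate_m_ge_mc, b, m_c, m_target, dt_days=1.0):
    """Probability of at least one event with M >= m_target in the next dt_days,
    given a forecast daily rate of events with M >= m_c and a Gutenberg-Richter
    b-value; assumes Poisson occurrence within the interval."""
    expected = rate_m_ge_mc * dt_days * 10.0 ** (-b * (m_target - m_c))
    return 1.0 - math.exp(-expected)

# Placeholder values: 50 events/day with M >= 2.0 forecast by a clustering model
print(prob_of_large_event(rate_m_ge_mc=50.0, b=1.0, m_c=2.0, m_target=5.9))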
Seminar No. 18
Speaker: Dr. Giuseppe Falcone (National Institute of Geophysics and Volcanology, Rome, Italy; Researcher)
Date & Time: 17 February 2011 (Thu), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 5 (D313), Tachikawa
Title: Earthquake occurrence models and their validation
Abstract:
This
presentation describes the tests and the forecast verification procedures of
three earthquake occurrence models applied to the various regions of the
globe (Italy, California, Japan) to assess the occurrence probabilities of
future earthquakes: two short-term (24-hour) models and one long-term (5- and
10-year) model. The first model for short-term forecasts
is a purely stochastic epidemic type earthquake sequence (ETES) model. The
second short-term model is an epidemic rate-state (ERS) forecast based on a
model that is physically constrained by the application to the earthquake
clustering of the Dieterich rate-state constitutive law. The third forecast
is based on a long-term stress transfer (LTST) model that considers the
perturbations of earthquake probability for interacting faults by static Coulomb
stress changes. The forecast verification procedures have been carried out in
a forward-retrospective way and in real time, making use of statistical tools
such as Relative Operating Characteristic (ROC) diagrams, the log-likelihood,
the N-test, the L-test, and the comparison of observed and forecast numbers of events. The seismic
hazard modeling approach so developed, after a suitable period of testing and
refinement, is expected to provide a useful contribution to earthquake hazard
assessment, even with a possible practical application for decision making
and public information. These
models have been submitted to the Collaboratory for the Study of Earthquake
Predictability (CSEP) for forecast testing for Italy (ETH Zurich) and Japan
(ERI Tokyo).
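One of the consistency checks named above, the N-test, can be sketched in a few lines (Python/SciPy; a common Poisson form of the test, shown for illustration rather than as the exact CSEP implementation):

from scipy import stats

def n_test(n_forecast, n_observed, alpha=0.05):
    """Poisson N-test: are the observed numbers consistent with the forecast rate?"""
    delta1 = 1.0 - stats.poisson.cdf(n_observed - 1, n_forecast)  # P(X >= n_obs)
    delta2 = stats.poisson.cdf(n_observed, n_forecast)            # P(X <= n_obs)
    consistent = (delta1 > alpha / 2.0) and (delta2 > alpha / 2.0)
    return delta1, delta2, consistent

print(n_test(n_forecast=12.3, n_observed=20))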
Seminar No. 17
Speaker: Dr. Kate Huihsuan Chen (National Taiwan Normal University; Assistant Professor)
Date & Time: 28 January 2011 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 5 (D313), Tachikawa
Title: Triggering effect of small to large earthquakes on the earthquake cycle of small repeating events
Abstract:
Knowledge
of what governs the timing of repeating earthquakes is essential to
understanding the nature of the earthquake cycle and to determining
earthquake hazard, yet the variability and controls of earthquake recurrences
are not well established. Several unsolved problems regarding the recurrence
properties of natural earthquake sequences remain: How
do the repeating sequences respond to static and dynamic stress perturbations
associated with nearby earthquakes? To what degree does fault interaction
influence the timing of repeating earthquakes? Do spatially adjacent
repeating sequences communicate with each other in a way that is clearly
evident in similar occurrence times or recurrence patterns? Here
we use a large population of small, characteristically repeating earthquakes
at Parkfield to study how the interaction of nearby earthquakes
affects their recurrence properties. We analyze 114 M -0.4 ~ 3.0 repeating
earthquake sequences (RES) to examine the triggering effect from nearby
microseismicity. We find that close-by events influence the timing of the RES in a
matter of minutes or hours by short-term triggering. Events that occurred
within less than 1 day of an RES often imposed or experienced high stress
changes. A stress increment of 10 kPa appears to be needed to produce such
effectively immediate triggering.

Seminar No. 16
Speaker: Prof. Rodolfo Console (National Institute of Geophysics and Volcanology, Rome, Italy; Senior Scientific Advisor)
Date & Time: 17 January 2011 (Mon), 15:00-16:00
Venue: The Institute of Statistical Mathematics, 4th-floor lounge, Tachikawa
Title: Renewal modeling and co-seismic stress transfer for seismic hazard assessment in the Corinth Gulf, Greece, fault system
Abstract:
Earthquake
forecasts have always been a difficult task because they can be affected by
uncertainty in terms of the most appropriate model and the involved parameter
values. The models adopted in this study belong to the category of
renewal models, based on the characteristic earthquake hypothesis, the
necessary ingredients of which are a fixed geometry and the knowledge of the
slip rate on the faults. Both the BPT and the Weibull distribution have been
tested. The hazard rate so obtained is then modified by the inclusion of a
permanent effect due to the Coulomb static stress change caused by failure of
neighboring faults that occurred since the latest characteristic earthquake
on the concerned fault. I apply this method along the Corinth Gulf extension
zone, a place that is rich with observations of strong-earthquake recurrence
behavior, to assess their relative forecast applicability. The validity of
the renewal models is assessed in a retrospective way on the data of the last
300 years by comparison with a plain time-independent Poisson model. This is
done by means of statistical tools such as the ROC diagram, the R-score, and the
log-likelihood ratio. I find that the renewal models perform better than the
Poisson hypothesis. It seems also that the BPT distribution works slightly
better than the Weibull distribution, while little advantage is achieved by
the introduction of the Coulomb static stress change in the forecasting
algorithm.
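For reference, the BPT renewal model used in such studies has the density (standard notation; parameter choices vary between studies)

f(t;\mu,\alpha) \;=\; \sqrt{\frac{\mu}{2\pi\alpha^{2}t^{3}}}\,\exp\!\left[-\frac{(t-\mu)^{2}}{2\mu\alpha^{2}t}\right], \qquad t>0,

with mean recurrence interval \mu and aperiodicity \alpha, and the time-dependent hazard enters the forecast through the conditional probability

P\bigl(T \le t+\Delta t \,\big|\, T > t\bigr) \;=\; \frac{F(t+\Delta t)-F(t)}{1-F(t)},

where F is the corresponding cumulative distribution and t is the elapsed time since the last characteristic earthquake.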
Seminar No. 15
Speaker: Dr. Tom Parsons (United States Geological Survey, U.S.A.; Research Geophysicist)
Date & Time: 4 January 2011 (Tue), 14:00-15:00
Venue: The Institute of Statistical Mathematics, Seminar Room 4 (D312-B), Tachikawa
Title: What causes aftershocks?
Abstract:
Despite
decades of research devoted to this question, we still do not know how
mainshocks cause aftershocks. In this presentation I show recent research
that attempts to isolate static and dynamic stressing signals so that we can
learn more about their abilities to trigger other earthquakes. We hope
someday to combine physical models with statistical models for forecasting,
but I show that the physical models still have problems when used
prospectively.

Seminar No. 14
Speaker: Shinji Toda (Research Center for Earthquake Prediction, Disaster Prevention Research Institute, Kyoto University; Associate Professor)
Date & Time: 17 November 2010 (Wed), 13:30-14:30
Venue: The Institute of Statistical Mathematics, Seminar Room 4 (D312-B), Tachikawa
Title: Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast
Abstract:
Numerous
studies retrospectively found that seismicity rate jumps (drops) due to
coseismic Coulomb stress increase (decrease). The Collaboratory for the Study
of Earthquake Predictability (CSEP) instead provides us with an opportunity for
prospective testing of the Coulomb hypothesis. Here we adapt our stress
transfer model incorporating the rate- and state-dependent friction law to the
CSEP Japan seismicity forecast. We demonstrate how to compute the forecast
rates of large shocks in 2009 using the large earthquakes during the past 120
years. The time dependent impact of the coseismic stress perturbations
explains qualitatively well the occurrence of the recent moderate size
shocks. Such ability is partly similar to that of statistical earthquake
clustering models. However, our model differs from
them as follows: the off-fault aftershock zones can be
simulated using finite fault sources; the regional areal patterns of
triggered seismicity are modified by the dominant mechanisms of the potential
sources; the imparted stresses due to large earthquakes produce stress
shadows that lead to a reduction of the forecasted number of earthquakes.
Although the model relies on several unknown parameters, it is the first
physics-based model submitted to the CSEP Japan test center and has the
potential to be tuned for short-term earthquake forecasts.
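A minimal sketch (Python; illustrative parameter values, not the model submitted to CSEP) of the widely used Dieterich-type expression for the seismicity-rate response to a coseismic Coulomb stress step, which is the core ingredient of such a rate/state forecast:

import numpy as np

def rate_after_stress_step(t, r0, dcfs, a_sigma, t_a):
    """Seismicity rate at time t (same units as t_a) after a Coulomb stress step
    dcfs, relative to the background rate r0, following Dieterich (1994).
    a_sigma is the constitutive parameter A*sigma; t_a is the aftershock duration."""
    return r0 / (1.0 + (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a))

t = np.array([0.01, 0.1, 1.0, 10.0, 50.0])     # years after the mainshock
print(rate_after_stress_step(t, r0=1.0, dcfs=0.1, a_sigma=0.04, t_a=30.0))
# Positive dcfs boosts the rate at early times and relaxes back to r0 over ~t_a;
# a negative dcfs produces a stress shadow (rates below r0) in the same way.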
Seminar No. 13
Speaker: Dr. Abhey Ram Bansal (National Geophysical Research Institute, Council of Scientific and Industrial Research (NGRI, CSIR), India)
Date & Time: 16 July 2010 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 4 (D312-B), Tachikawa
Title: Statistical seismology of Sumatra: before and after the mega-earthquake of 26 December 2004
Abstract:
We
examine the effect of the Sumatra-Andaman Islands mega-earthquake of 26
December 2004 on Sumatran seismicity. Irrespective of time window the
inter-event time distribution exhibits Omori-Utsu decay at short time
intervals, followed by a gamma distribution at intermediate and long time
intervals. The mean inter-event
time drops significantly after the event, associated with an increase in the
fractal dimension of the time-series sequence from 0.07 ± 0.01 to 0.19 ± 0.01,
and a doubling of the coefficient of variation of the inter-event time.
Epidemic Type Aftershock Sequence (ETAS) model parameters are used to
quantify changes in the statistical properties. These changes can be
significant: the background random seismicity rate μ drops to be
indistinguishable from zero after the mega-earthquake (indicating a single
dominant aftershock sequence) whereas the productivity factor K is ~13.4
times higher. The magnitude sensitivity parameter α increases
after the earthquake, indicating larger aftershocks are proportionately
easier to trigger, and the Omori-Utsu exponent p decreases (to ~1),
consistent with a longer duration of aftershock sequences. When examined in
retrospect, a non-stationary ETAS model provides a better fit to the
seismicity data prior to the main shock, the best fitting model being one
with a change point to enhanced rates 4.3 years before the mega-earthquake,
most likely due to a localised swarm at that time.
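For readers unfamiliar with the notation, the temporal ETAS model referred to throughout these abstracts specifies the conditional intensity (one standard parameterization; details vary between implementations)

\lambda(t \mid H_t) \;=\; \mu \;+\; \sum_{i:\,t_i<t} \frac{K\, e^{\alpha (M_i - M_c)}}{(t - t_i + c)^{p}},

where \mu is the background rate, K the productivity factor, \alpha the magnitude sensitivity, c and p the Omori-Utsu parameters, M_c the magnitude threshold, and the sum runs over all earlier events (t_i, M_i).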
Special Seminar
Speaker: Prof. David Vere-Jones (Statistics Research Associates Limited (SRA), New Zealand; Director)
Date & Time: 30 June 2010, 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 4 (D312-B), Tachikawa
Title: The Evolution of Statistical Seismology
Abstract:
Some
account is given of the early days of statistical seismology, and some issues
raised for future consideration in this field.

Seminar No. 12
Speaker: Katsunori Sugaya (The Institute of Statistical Mathematics, Prediction and Knowledge Discovery Research Center, Earthquake Prediction Analysis Group; Project Researcher)
Date & Time: 23 April 2010 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 4 (D312-B), Tachikawa
Title: Coseismic change and recovery of the scattering environment in the crust after a large earthquake (Sugaya, K., Hiramatsu, Y., Furumoto, M. and Katao, H.)
Abstract:
The
coda waves mainly consist of scattered S waves in the crust. The attenuation
property of coda waves, coda Q^-1 or Qc^-1, reflects the scattering and
absorption environments in the crust and is supposed to be a good tool to
investigate medium properties in the crust. Furthermore, Qc^-1 is a good
indicator of the stress condition in the crust. We
observe a unique temporal variation in crustal heterogeneity from the
analysis of Qc^-1 over 14 years in the Tamba region, northeast of the rupture
zone of the 1995 Hyogo-ken Nanbu earthquake (Mjma 7.3) in southwest Japan.
The values of Qc^-1 at lower frequencies (1.5-4.0 Hz) that increased
coseismically due to the static stress change decreased back to the pre-event
values in about two years. No such variations are found at higher frequencies
(5.0-24.0 Hz). We confirm that no tectonic events that cause a significant
stress change occurred during the recovery period. The time required for the
recovery of the scattering environment, such as the number density of cracks
and cracks opened by the stress change, observed here is consistent with
those of previous studies focused on the brittle shallower crust. This
suggests the possibility that a similar recovery mechanism operates in both
the brittle and the ductile parts of the crust.
Seminar No. 11
Speaker: Prof. Hans R. Kuensch (ETH Zurich, Switzerland; Professor)
Date & Time: 9 April 2010 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 4 (D312-B), Tachikawa
Title: Particle and Ensemble Kalman filtering (Kuensch, Hans R. and Frei, Marco)
Abstract:
Ensemble
and particle filtering are two sequential Monte Carlo methods for
approximating the filter distributions in nonlinear and/or non-Gaussian state
space models. They differ in the way a new observation is taken into account
in the update step. The particle filter is non-parametric and works by
weighting and resampling. The Ensemble Kalman filter (EnsKF) requires a
linear Gaussian observation model and proceeds by a Monte Carlo
implementation of the linear Kalman
filter update. In a different version of the EnsKF, called the square-root
filter, updates are done by a linear transformation of the prediction sample
(without any additional randomness). Even though the EnsKF is based on
unwarranted assumptions, it is extremely robust in many high dimensional
applications where the number of particles is small because of computational
limitations. In contrast, the particle filter degenerates quickly as the
dimension of the observations increases. I
will discuss some of the following issues: EnsKF for
nonlinear observation equations, the use of the EnsKF for parameter
estimation, regularization of the estimated prediction covariance, bias due
to estimation of the prediction covariance, localization of the EnsKF update,
behavior of the EnsKF for Gaussian mixture predictions and ideas for
combining the EnsKF and the particle filter.
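A minimal sketch (Python/NumPy; a toy illustration of the stochastic, perturbed-observation variant, not the speakers' algorithms) of one EnsKF analysis step with a linear observation operator:

import numpy as np

rng = np.random.default_rng(0)
n, N, m = 3, 50, 2                      # state dim., ensemble size, obs. dim.

X = rng.normal(size=(n, N))             # forecast ensemble (columns are members)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])         # linear observation operator
R = 0.25 * np.eye(m)                    # observation error covariance
y = np.array([1.0, -0.5])               # observation

A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
P = A @ A.T / (N - 1)                   # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
X_a = X + K @ (Y_pert - H @ X)          # perturbed-observation update, member-wise

print("analysis mean:", X_a.mean(axis=1))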
Seminar No. 10
Speaker: Qi Wang (Department of Earth and Space Sciences, University of California, Los Angeles, U.S.A.)
Date & Time: 19 February 2010 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 3 (D312-A), Tachikawa
Title: An optimized five-year large earthquake forecast in California based on smoothed seismicity (Qi Wang, D. D. Jackson and Y. Y. Kagan)
Abstract:
Earthquake
forecasting models based on seismic, geologic, tectonic and geodetic
information have been widely discussed. We attempted to produce a five-year forecast
of California earthquakes with magnitude above 5.0 based on smoothed
seismicity. We used extended sources to represent large earthquakes. Kagan
and Jackson (2007) have presented a five-year forecast of southern California
earthquakes with magnitude 5 or larger. Here we extend the forecast region
from southern California to all of California using the new California
earthquake catalog with estimated moment magnitude, and in one case we
include historic earthquakes as well as instrumentally recorded ones after
considering catalog incompleteness. This forecast model differs from others
like it because larger events are represented by multiple point sources and
because the parameters in the spatial smoothing kernel have been optimized in
the learning period by using maximum likelihood estimation. We tried to answer
two basic questions: (1) whether the anisotropic and
magnitude-dependent smoothing kernels outperform the isotropic and
magnitude-independent ones; (2) whether including large historical
earthquakes could improve the forecast.
Moreover, earthquake data have errors in location, magnitude and focal
mechanism that can influence the results of earthquake studies. Neglecting
these errors, or estimating them poorly, could cause valid hypotheses to be
rejected or invalid ones to be accepted. We tried to estimate how the
uncertainty of catalog data, especially magnitude uncertainty, influences
our testable forecast results.
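A minimal sketch (Python/NumPy; not the authors' code) of the basic ingredient of such a smoothed-seismicity forecast: spreading past epicenters onto a grid with an isotropic power-law kernel. The catalog, grid, and kernel parameters are illustrative assumptions; the kernel optimization described above is omitted.

import numpy as np

rng = np.random.default_rng(2)
quakes = rng.uniform(0.0, 100.0, size=(200, 2))      # toy epicenters (km)

d = 5.0                                               # kernel smoothing distance (km)
xg, yg = np.meshgrid(np.arange(0.0, 100.0, 2.0),
                     np.arange(0.0, 100.0, 2.0))
cells = np.column_stack([xg.ravel(), yg.ravel()])

# Isotropic power-law kernel ~ (r^2 + d^2)^(-1.5), summed over past earthquakes
r2 = ((cells[:, None, :] - quakes[None, :, :]) ** 2).sum(axis=2)
rate = ((r2 + d ** 2) ** -1.5).sum(axis=1)
rate /= rate.sum()                                    # normalize to a probability map

print(rate.reshape(xg.shape).max())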
Seminar No. 9
Speaker: Dr. Toshiko Terakawa (Geodynamics, Steinmann Institute, University of Bonn, Germany)
Date & Time: 29 January 2010 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 3 (D312-A), Tachikawa
Title: Identification of the high fluid pressure source driving the 2009 L'Aquila earthquake sequence (Toshiko Terakawa, Anna Zoporowski, Boris Galvan, and Stephen A. Miller)
Abstract:
The
April 6, 2009 L'Aquila intraplate earthquake (Mw=6.3) in the Central
Apennines occurred at the boundary separating regions of diffuse CO_2
degassing and regions where degassing is not observed. The same tectonic and
geologic environment hosted the 1997 Colfiorito sequence to the north, which
was shown to be driven by degassing of a high-pressure fluid source at depth
with a fluid diffusion model [Miller et al. 2004]. Here we show the 3D fluid
pressure field in the L'Aquila
region, applying a new analysis technique termed Focal Mechanism Tomography
[Terakawa et al. 2009 (submitted)] to actual seismic data. We identify three large-scale pockets
of high fluid pressure at depths of 7-10 km, and show a very strong
correlation between these high fluid pressure regions and both foreshock and
aftershock hypocenters. The shape of over-pressured regions and the evolution
of aftershock locations indicate that aftershocks are being driven in part by
fluid flow associated with volumetric compression from the main-shock acting
upon the over-pressured poro-elastic reservoir [e.g., Nur & Booker 1982;
Bosl & Nur 2002], and by fracturing and subsequent flow from trapped high
pressure pockets [Miller et al. 2004].
The mapped 3D fluid pressure field inferred from our analysis provides
an important boundary condition for forward modelling of fluid flow and
stress evolution for a mechanistic assessment of the continuing seismic
hazard in the region. These results also form a baseline hypothesis against
which other geophysical and geochemical measurements can be tested.

Seminar No. 8
Speaker: Tetsuto Himeno (National Institute of Polar Research, Transdisciplinary Research Integration Center; Project Researcher)
Date & Time: 15 January 2010 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room A508, Tachikawa
Title: Time variation of background seismicity in the ETAS model
Abstract:
The
Epidemic Type Aftershock Sequence (ETAS) model introduced by Ogata (1988) is
one of the statistical models commonly used for seismicity analysis. The model
classifies an earthquake sequence into aftershocks and background seismicity.
For the frequency of aftershocks, some empirical relations are assumed, while
the background seismicity rate is held constant, so that the mainshock
(background) sequence follows a stationary Poisson process. In this study, we
focus on the time variation of the background seismicity and extend the
background component of the model accordingly.
Seminar No. 7
Speaker: Dr. Annie Chu (The Institute of Statistical Mathematics, Prediction and Knowledge Discovery Research Center, Earthquake Prediction Analysis Group; Project Researcher)
Date & Time: 4 December 2009 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 3 (D312-A), Tachikawa
Title: Comparison of ETAS parameter estimates across different global tectonic zones
Abstract:
Branching
point process models such as the ETAS (Epidemic-Type Aftershock Sequence)
models introduced by Ogata (1988, 1998) are often used in the description,
characterization, simulation, and declustering of modern earthquake catalogs.
The present work investigates how the parameters in these models vary across
different tectonic zones. After considering divisions of the surface of the
Earth into several zones based on the plate boundary model of Bird (2003),
ETAS models are fit to the occurrence times and locations of shallow
earthquakes within each zone. Computationally, the EM-type algorithm of Veen
and Schoenberg (2008) is employed for the purpose of model fitting. The fits and variations in parameter
estimates for distinct zones are compared. Seismological explanations for the
differences between the parameter estimates for the various zones are
considered, and implications for seismic hazard estimation and earthquake
forecasting are discussed.

Seminar No. 6
Speaker: Prof. Shiyong Zhou (School of Earth and Space Sciences, Peking University; Professor)
Date & Time: 30 October 2009 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room D304, Tachikawa
Title: Was the Ms 8.0 Long Men Shan (Wenchuan) Earthquake in 2008 induced by the Zipingpu Reservoir?
Abstract:
According
to the Coulomb failure criterion the variation of either shear stress, normal
stress, or pore pressure can result in earthquake occurrences. Abnormal
seismicity increases around reservoirs are often thought to be induced by the
water piled behind the dam, which leads to increases in crustal pore pressure
and Coulomb stress nearby, and so promote the nearby faults to fail. To
investigate how much the Zi-Ping-Pu reservoir, whose dam is just a few
hundreds of meters from the Long Men Shan fault, influenced the May 12, 2008
Wenchuan earthquake Ms8.0, we calculated the Coulomb stress variation induced
by the filling of the Zipingpu reservoir, beginning in December 2004, and
analyzed the correlation between the seismicity variations and the induced
Coulomb stress variations. Both the calculated Coulomb stress variations and
the observed seismicity analysis suggest that the probability that the huge
Long Men Shan earthquake Ms8.0 was induced by the Zipingpu reservoir is very
low. The filling of the Zipingpu reservoir could only result in a small
increase in the rate of shallow earthquakes with hypocenter depth smaller
than 5 km near the reservoir region.
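For reference, the Coulomb failure criterion invoked above is usually quantified through the Coulomb stress change (a standard form; sign conventions and the treatment of pore pressure vary between studies):

\Delta \mathrm{CFS} \;=\; \Delta\tau \;+\; \mu\,(\Delta\sigma_n + \Delta P),

where \Delta\tau is the shear-stress change resolved in the slip direction, \Delta\sigma_n the normal-stress change (tension positive), \Delta P the pore-pressure change, and \mu the friction coefficient (often replaced by an effective \mu' when \Delta P is not modeled explicitly). Positive \Delta\mathrm{CFS} brings a fault closer to failure.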
Seminar No. 5
Speaker: Shunichi Nomura (The Graduate University for Advanced Studies (SOKENDAI); Graduate Student)
Date & Time: 2 October 2009 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room D312, Tachikawa
Title: Composing prior distributions of the BPT distribution with slip rates in a renewal process of recurrent earthquakes
Abstract:
A renewal
process with Brownian Passage Time (BPT) distribution is mainly used in the
long-term evaluation of active faults by the Earthquake Research Committee,
the Headquarters of Earthquake Research Promotion (ERC/HERP). In ERC/HERP
(2001), the two parameters of the BPT distribution, the mean parameter and the
aperiodicity (shape) parameter, are estimated as follows: the mean parameter is
estimated from either recurrence intervals or slip rates of active faults, and
the aperiodicity parameter is commonly set at 0.24. The value of the
aperiodicity parameter is derived from the active faults on which more than
four recurrence intervals are detected, but recent research has shown that this
value of the aperiodicity parameter is not appropriate for all active
faults. In
this seminar, we propose an alternative method to estimate the parameters of
BPT distribution of recurrence intervals. First, we normalize the intervals
by mean recurrence time derived from slip size and slip rate of the fault.
Then we can use both intervals and slip rate to estimate its mean parameter.
Second, we use Bayesian inference to estimate the parameters. Bayesian
predictive distribution gives stable predictions when there are few data for
the fault. Prior distributions and their parameters are estimated by
maximizing their marginal likelihood. We simulate recurrence intervals from a
virtual fault and compare our model and ERC/HERP's model by their relative
entropy.
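A minimal sketch (Python/NumPy; not the speaker's method) of the simulation side of such a study: drawing recurrence intervals from a BPT (inverse Gaussian) distribution, normalizing them by a slip-rate-based mean, and recovering the aperiodicity from the sample. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(3)

mean_interval = 1200.0        # mean recurrence time implied by slip size / slip rate (yr)
alpha = 0.24                  # aperiodicity used as the common value in the evaluation

# BPT = inverse Gaussian (Wald) with mean mu and shape lambda = mu / alpha^2
intervals = rng.wald(mean_interval, mean_interval / alpha ** 2, size=8)

normalized = intervals / mean_interval                    # intervals scaled by the slip-based mean
alpha_hat = normalized.std(ddof=1) / normalized.mean()    # sample coefficient of variation
print(intervals.round(0), alpha_hat.round(3))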
Seminar No. 4
Speaker: Takao Kumazawa (The Institute of Statistical Mathematics, Prediction and Knowledge Discovery Research Center, Earthquake Prediction Analysis Group; Research Assistant)
Date & Time: 28 August 2009 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 2 (Room 330)
Title: Seismicity changes in Tohoku District before the 2008 Iwate-Miyagi Inland Earthquake (Kumazawa, Takao (The Graduate University for Advanced Studies), Ogata, Yoshihiko (The Institute of Statistical Mathematics) and
Abstract:
The
2008 Iwate-Miyagi Nairiku earthquake of M7.2 (JMA) is one of the largest
inland earthquakes that have occurred in the northern part of Honshu island. We
examined seismicity rate changes in the surrounding areas for around 10 years
prior to this earthquake, in relation to its ∆CFF values. We selected a number of regions according
to the sign of the ∆CFF from the Iwate-Miyagi inland earthquake, then
examined the series of earthquakes in each region by fitting ETAS models and
examined if the changes of seismicity, if any, were significant and
consistent with the ∆CFF. We confirmed the precursory seismicity changes using
transient anomalies in GPS measurements at several stations. Five
years prior to the event, an M7.1 earthquake occurred in 2003 at 71 km depth in
the subducting Pacific plate beneath the Sanriku coast, Miyagi prefecture. This
fault's movement
activated the reverse fault system in inland Tohoku District including the
one responsible for the Iwate-Miyagi inland earthquake, raising the seismicity
there. The activated area also covered the fault of the M6.5 inland earthquake
that occurred two months afterward, which likely contributed to the activation
as well. Here
we hypothesize that these two large earthquakes in 2003 enhanced the precursory
slow slip on the fault of our interest, including its deeper extension, and
thereby explain the observed pre-seismic patterns. This hypothesis is in
concordance with the GPS measurements from several stations on and around the
fault. The rate of distance shrinkage between the two stations, one being
right on the fault, can only be well explained by assuming that the active
site moved deeper at some point. We
fitted the ETAS model to the observed seismic activity in each selected area
and searched for meaningful change points in time, across which the seismicity
changed significantly in terms of AIC. The search for change points was done by
first dividing the whole period either at the times of known events or at
candidate points shifted systematically, and then refitting the ETAS model to
each sub-period. Among these combined ETAS models, the best-performing model
was compared with a single ETAS model without divisions to examine whether its
improvement was significant. We finally checked that these change
points are consistent with the ∆CFFs.
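A minimal sketch (Python/NumPy) of the change-point logic described above. Refitting ETAS itself requires specialized software, so a homogeneous Poisson rate is used here as a stand-in model; the AIC comparison between a single-rate fit and a two-segment fit with one change point is the same in spirit. The event times and the simple +2 penalty per extra parameter are illustrative assumptions.

import numpy as np

def poisson_loglik(n_events, duration):
    """Maximized log-likelihood of a homogeneous Poisson process on an interval."""
    if n_events == 0 or duration <= 0.0:
        return 0.0
    rate = n_events / duration
    return n_events * np.log(rate) - rate * duration

rng = np.random.default_rng(4)
T = 10.0
times = np.sort(np.concatenate([rng.uniform(0.0, 6.0, 30),     # quieter early period
                                rng.uniform(6.0, T, 90)]))     # activated later period

aic_single = -2.0 * poisson_loglik(len(times), T) + 2.0 * 1

best = (np.inf, None)
for tau in np.linspace(0.5, T - 0.5, 96):                      # candidate change points
    n1 = int(np.searchsorted(times, tau))
    n2 = len(times) - n1
    ll = poisson_loglik(n1, tau) + poisson_loglik(n2, T - tau)
    aic = -2.0 * ll + 2.0 * 3                                  # two rates + change point
    if aic < best[0]:
        best = (aic, tau)

print(f"AIC single={aic_single:.1f}, AIC change-point={best[0]:.1f} at t={best[1]:.2f}")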
Seminar No. 3
Speaker: Mitsuhiro Matsu'ura (The Institute of Statistical Mathematics, Prediction and Knowledge Discovery Research Center, Earthquake Prediction Analysis Group; Project Professor)
Date & Time: 14 August 2009 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 2 (Room 330)
Title: Inversion of seismic and geodetic data to estimate tectonic stress fields in the Earth's crust
Abstract:
The
Earth's crust is macroscopically treated as a linear elastic body, but it
includes a number of defects. The occurrence of inelastic deformation such as
brittle fracture at the defects, which can be generally represented by moment
tensor, brings about elastic deformation in the surrounding regions. Since
the moment tensor is mathematically equivalent to the volume integral of
stress release over the whole elastic region surrounding the source, we can
quantitatively relate the centroid moment tensor (CMT) solutions of seismic
events with a true but unknown tectonic stress field. On the basis of such an
idea, we developed an inversion method to estimate tectonic stress fields
from CMT data using Akaike's Bayesian information criterion (ABIC) [1]. We
show the 3-D tectonic stress pattern in and around Japan, obtained by
applying the CMT data inversion method to 15,000 seismic events within the
magnitude range of 3.5-5.0 in the period of 1997-2007. The inverted 3-D
stress pattern illuminates the present-day (Quaternary) complex tectonic
motion of Japanese Islands. On the other hand, the crustal deformation
observed through geodetic measurements is the sum of the inelastic
deformation as source and the elastic deformation as effect. Representing the
sources by moment density tensor distribution, we created a theory of
physics-based strain analysis, and developed an inversion method to
separately estimate 3-D elastic and inelastic strain fields from GPS data
[2]. In this method, first, we determine the optimum distribution of moment
density tensor from observed GPS array data by using Akaike's information
criterion (AIC). Converting the optimum moment density tensor distribution
with elastic compliance tensor, we can directly obtain 3-D inelastic strain
fields. Given the optimum moment density tensor distribution, we can
theoretically compute 3-D elastic strain fields. We applied the inversion
method to GPS horizontal velocity data (1996-2000) in the Niigata-Kobe
transformation zone, central Japan, and succeeded in estimating 3-D elastic
and inelastic strain
rate fields separately. References: 1. Terakawa, T.
and M. Matsu'ura (2008), CMT data inversion using a Bayesian information
criterion to estimate seismogenic stress fields, Geophys. J. Int. 2.
Noda, A. and M. Matsu'ura (2009), Physics-based GPS data inversion to
estimate 3-D elastic and inelastic strain fields, Geophys. J. Int.,
submitted.

Seminar No. 2
Speaker: Dr. Jiancang Zhuang (The Institute of Statistical Mathematics, Department of Statistical Modeling; Assistant Professor)
Date & Time: 31 July 2009 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 2 (Room 330)
Title: Gambling with reputation: On scoring earthquake forecasts and predictions
Abstract:
This
seminar presents a new method, namely the gambling score, for scoring the
performance of earthquake forecasts or predictions. Unlike most other scoring
procedures, which require a regular forecasting scheme and treat each
earthquake equally regardless of its magnitude, this new scoring method
compensates for the risk that the forecaster has taken. Once a forecaster makes
a prediction or forecast, he is assumed to have bet some points of his
reputation. The reference model, which plays the role of the house, determines
how many reputation points the forecaster can gain if he succeeds, according to
a fair rule, and also takes away the reputation points bet by the forecaster if
he loses. From the viewpoints of both the reference model and the forecaster,
the rule for reward and punishment is fair. This method is also extended
to the continuous cases of point
process models, where the reputation points bet by the forecaster become a
continuous mass on the space-time-magnitude range of interest. We also
calculate the upper bound of the gambling score when the true model is a
renewal process, the stress release model or the ETAS model and when the
reference model is the Poisson model.
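A minimal sketch (Python; a simplified binary-forecast version of the idea, not the full method) of how the reputation bookkeeping works: the reference model's probability sets fair odds, so the expected gain under the reference model is zero.

def reputation_change(bet, p_reference, event_occurred):
    """Return the forecaster's reputation change for a single binary forecast.
    The reference model assigns probability p_reference to the target event;
    fair odds pay bet * (1 - p)/p on success and take the bet on failure."""
    if event_occurred:
        return bet * (1.0 - p_reference) / p_reference
    return -bet

# A forecaster bets 1 reputation point on each of four alarms; the reference
# (e.g., Poisson) probabilities and outcomes below are made up for illustration.
forecasts = [(0.01, True), (0.02, False), (0.05, False), (0.10, True)]
score = sum(reputation_change(1.0, p, hit) for p, hit in forecasts)
print(round(score, 2))   # total gambling score over the four forecasts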
Seminar No. 1
Speaker: Dr. David Harte (Statistics Research Associates Limited (SRA), New Zealand; Director)
Date & Time: 3 July 2009 (Fri), 15:00-16:00
Venue: The Institute of Statistical Mathematics, Seminar Room 2 (Room 330)
Title: Using R for Modelling Marked Point Processes Indexed by Time
Abstract:
A
unified approach will be described for the fitting and analysis of a large
class of point process models. The models discussed will be those that are
indexed by time and contain additional marks. These include many of the point
process models used for describing earthquake processes, e.g. stress release
model, ETAS model, spatial ETAS model and the linked stress release model. By
exploiting characteristics of their conditional intensity function, the
log-likelihood function can be reduced to a more simple form. This simplified
structure can also be used to calculate the residual process and perform
simulations. The method involves "decomposing" these models into
more elementary parts, and then utilising these parts in an object-oriented
manner within the R programming language. The longer-term purpose of such an
approach is to lessen the programming effort involved in fitting such models.
This should make it easier to modify existing models and hence to test
alternative hypotheses. I will show R code for some models fitted to
earthquake data.
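The simplification referred to above rests on the standard log-likelihood of a marked point process indexed by time, written in terms of its conditional intensity (generic notation):

\log L \;=\; \sum_{i} \log \lambda(t_i, m_i \mid H_{t_i}) \;-\; \int_{0}^{T}\!\!\int_{\mathcal{M}} \lambda(t, m \mid H_t)\, \mathrm{d}m\, \mathrm{d}t,

where (t_i, m_i) are the observed times and marks over the observation period [0, T], \mathcal{M} is the mark space, and H_t is the history of the process up to time t. Models such as ETAS and the stress release model differ only in how \lambda is specified, which is what allows a common fitting, residual-analysis, and simulation framework.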