Koji Kanefuji (The Institute of Statistical Mathematics)
Alan Welsh (Australian National University, Australia)
Pierre R. L. Dutilleul (McGill University, Canada)
Kunio Shimizu (The Institute of Statistical Mathematics)
Kenichiro Shimatani (The Institute of Statistical Mathematics)
Invited Speakers
Song Xi Chen (Peking University, China)
Pierre R. L. Dutilleul (McGill University, Canada)
Petra Kuhnert (CSIRO, Australia)
Subhash Lele (University of Alberta, Canada)
Cleridy Lennert-Cody (Inter-American Tropical Tuna Commission, USA)
Alan Welsh (Australian National University, Australia)
Youngseon Lee (Samsung SDS, Korea)
Kenichiro Shimatani (The Institute of Statistical Mathematics, Japan)
Keiichi Fukaya (The National Institute for Environmental Studies, Japan)
Koyomi Nakazawa (Fukuoka Institute of Technology, Japan)
Spatio-Temporal Point Patterns, Periodicities and Earthquakes: An Analytical Extension Including the Hypocenter Depth
Pierre Dutilleul (McGill University)
This work builds upon results obtained by Dutilleul et al. (2015, Journal of Geophysical Research) for periodicity analysis of earthquake occurrence,
and relates to developments made by Guo et al.
(2015, Geophysical Journal International), concerning a hypocentral version of the space-time ETAS model for earthquake catalog declustering.
The location of an earthquake, through the latitude, longitude and depth of its hypocenter in 3-D space and the date and clock time of rupture, makes it a “point” in a spatio-temporal point pattern,
observed over a given territory and months/years/decades.
The magnitude associated with each earthquake marks the point pattern, as hypocenter depth would if only the latitude and longitude of epicenters were used for spatial location (in 2-D space).
In this talk, a version of the multifrequential periodogram analysis allowing for missing observations in temporal series explored for periodicities is presented,
and illustrated by hypocenter depth (monthly mean and median) from original and ETAS-declustered (in 2-D versus 3-D space) data catalogs for central California.
A semiannual periodicity is identified and fitted, and results are discussed in relation to periodicities found in time series of monthly earthquake numbers,
the declustering procedure used or not in a preliminary step, and magnitude thresholds.
Estimation for Inhomogeneous Neyman-Scott processes by Bayesian Approach Using Palm Likelihood
Kenichiro Shimatani (The Institute of Statistical Mathematics)
Spatially clustering distributions are commonly seen in plant populations and have been modeled by spatial point processes.
The fundamental spatial point process model is the homogeneous Poisson process.
If plant densities change along environmental gradients, an inhomogeneous Poisson process may explain the data.
If clusters are formed by daughters dispersed over limited distances from each mother, the Neyman-Scott process plays a central role.
In nature, spatial distributions are rarely homogeneous, and we need inhomogeneous Neyman-Scott processes.
The inhomogeneous elements may lie in the parental distribution, the survival of daughters, the dispersal distances, or the numbers of daughters produced by mothers.
The inhomogeneous Poisson process has an explicitly written likelihood, whereas the homogeneous Neyman-Scott process does not, nor do the more complex inhomogeneous Neyman-Scott processes.
Parameter estimation therefore requires an alternative statistical approach.
This study extended the Palm likelihood approach for homogeneous Neyman-Scott processes to the inhomogeneous case,
and applied a Bayesian method with the Metropolis-Hastings algorithm for parameter estimation. The methodology was examined on artificial datasets.
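For readers unfamiliar with Neyman-Scott processes, the clustering mechanism described above can be sketched with a short simulation of the homogeneous Thomas process (the special case with Gaussian dispersal). This is a generic illustration of the model class, not the authors' estimation method; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_thomas(kappa, mu, sigma, window=(0.0, 1.0)):
    """Simulate a homogeneous Thomas process (a Neyman-Scott process
    with isotropic Gaussian dispersal) on a square window.

    kappa : intensity of the Poisson process of mothers
    mu    : mean number of daughters per mother
    sigma : standard deviation of dispersal around each mother
    """
    lo, hi = window
    area = (hi - lo) ** 2
    # Mothers form a homogeneous Poisson process
    n_mothers = rng.poisson(kappa * area)
    mothers = rng.uniform(lo, hi, size=(n_mothers, 2))
    points = []
    for m in mothers:
        # Each mother produces a Poisson number of daughters,
        # displaced by Gaussian noise around the mother's location
        n_daughters = rng.poisson(mu)
        points.append(m + rng.normal(0.0, sigma, size=(n_daughters, 2)))
    pts = np.vstack(points) if points else np.empty((0, 2))
    # Keep only daughters inside the observation window
    inside = np.all((pts >= lo) & (pts <= hi), axis=1)
    return pts[inside]

pts = simulate_thomas(kappa=20, mu=10, sigma=0.02)
print(pts.shape)  # about kappa * mu = 200 points before edge clipping
```

Inhomogeneity can then be introduced at any stage (mother intensity, daughter counts, or dispersal), which is what makes the likelihood intractable and motivates the Palm likelihood approach.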
Vizumap: An R Package for Visualizing Uncertainty in Spatial Data
Petra M. Kuhnert (CSIRO, Australia),
Lydia R. Lucchesi (University of Washington),
Christopher K. Wikle (University of Missouri)
The quantification, visualization and communication of uncertainty in spatial data is important for decision-making.
It can highlight regions on a map that are poorly predicted and identify a need for further sampling.
Uncertainty can also help to prioritise regions in terms of where to focus remediation efforts and allocate investment.
It can also provide assurance about where modelling efforts are working well, and flag where they fail so as to trigger further investigation.
Unfortunately, uncertainty is rarely included on maps that convey spatial estimates.
Approaches for visualizing uncertainty in spatial data will be presented. These include the bivariate choropleth map, map pixelation, glyph rotation and exceedance probability maps.
Bivariate choropleth maps explore the “blending” of two colour schemes, one representing the estimate and a second representing the margin of error. The second approach uses map pixelation to convey uncertainty.
The third approach uses a glyph to represent uncertainty and is what we refer to as glyph rotation.
The final map based exploration of uncertainty is through exceedance probabilities.
We showcase these approaches using the Vizumap R package applied to sediment load estimates in the Great Barrier Reef,
which were developed from a Bayesian Hierarchical Model (BHM) that assimilated estimates of sediment concentration and flow with modelled output from a catchment model developed on the Upper Burdekin Catchment in Queensland, Australia.
Covariance Models for Spatio-temporal Processes on a Regular Grid: Flexible, yet Computationally Simple
Dean Koch (University of Alberta),
Subhash Lele (University of Alberta),
Mark Lewis (University of Alberta)
The mountain pine beetle (MPB) is a major forest pest in North America.
Due to climate change and other anthropogenic factors, MPB is spreading across large regions of Canada, substantially affecting forestry and the forestry-related economy in British Columbia and Alberta.
Understanding the spread of MPB is important to devise biological control systems. The data on MPB are available on a regular grid across space and time.
Two important impediments to modelling large spatial or spatio-temporal data are (a) Specification of the spatial covariance structure,
(b) Conducting likelihood inference that involves computation of determinant and inversion of large matrices.
We utilize a flexible class of covariance models, called separable covariance models, to model the dependence structure much more flexibly than the commonly used isotropic models.
These models allow us to reduce the computational complexity by several orders of magnitude and at the same time, to increase the model flexibility by allowing geometric and other kinds of anisotropies.
We will discuss the statistical and computational implications of separable covariance models for analyzing large amounts of spatio-temporal Gaussian and non-Gaussian data.
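The computational gain from separability comes from standard Kronecker-product identities: if the full covariance factors as a Kronecker product of small temporal and spatial covariance matrices, its log-determinant and inverse are obtained from the factors alone. A minimal numerical check of these identities (illustrative only, not the talk's specific model):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """Random symmetric positive-definite matrix."""
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)

# Small temporal (4 x 4) and spatial (5 x 5) covariance factors
A = random_spd(4)
B = random_spd(5)

# Separable covariance of the full 20-dimensional process
Sigma = np.kron(A, B)

# log det(A kron B) = m * log det(A) + n * log det(B) for A n x n, B m x m
full_logdet = np.linalg.slogdet(Sigma)[1]
fast_logdet = 5 * np.linalg.slogdet(A)[1] + 4 * np.linalg.slogdet(B)[1]
assert np.isclose(full_logdet, fast_logdet)

# (A kron B)^-1 = A^-1 kron B^-1: invert two small matrices,
# never the full one
assert np.allclose(np.linalg.inv(Sigma),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
print("Kronecker identities verified")
```

For a grid with n time points and m cells, likelihood evaluation then costs O(n^3 + m^3) rather than O(n^3 m^3), which is the several-orders-of-magnitude saving mentioned above.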
14:50-15:20
▼Human Health Risk Assessment of Mercury Caused from Artisanal Small-scale Gold Mining in Indonesia
Koyomi Nakazawa (Fukuoka Institute of Technology),
Osamu Nagafuchi (Fukuoka Institute of Technology),
Takanobu Inoue (Toyohashi University of Technology),
Tomonori Kawakami (Toyama Prefectural University),
Elvince Rosana (University of Palanka Raya),
Koji Kanefuji (The Institute of Statistical Mathematics),
Kenichi Shinozuka (Fukuoka Institute of Technology)
Artisanal small-scale gold mining (ASGM) using the mercury amalgamation method remains widespread in many developing countries.
Miners sell their mercury-gold amalgam to gold shops located in their towns.
At the gold shops, the amalgam is burned to evaporate the mercury without any equipment to prevent mercury vapor exposure.
The goal of our study is to clarify the human health risk of mercury exposure around ASGM areas. We conducted field measurements in Bengkulu, Sumatra, Indonesia.
Gaseous elemental mercury (GEM) concentrations in the atmosphere ranged from 4.10 ng/m3 (ambient air) to 2 million ng/m3 (inside gold shops).
Total mercury (T-Hg) concentrations ranged from 5.30 ng/L to 2,490 ng/L in river water, and from 0.34 mg/kg to 25.6 mg/kg in sediments and soils.
In addition, the T-Hg concentration in brown rice was 0.044 mg/kg. We used these concentrations to calculate hazard quotients (HQs) by means of a probabilistic risk assessment method.
The results indicate that gold shop workers and gold refining workers may face an inhalation risk from mercury vapor.
Human health risks originating from contaminated food intake are also a concern for each group.
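The probabilistic hazard-quotient idea can be sketched as a Monte Carlo screen: draw exposure concentrations from a distribution and compare each draw with a reference level. The concentration range below matches the GEM values reported above, but the log-uniform exposure distribution and the reference level of 300 ng/m3 are assumptions for illustration only, not the study's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical probabilistic hazard-quotient (HQ) screen for inhalation
# exposure; exposure model and reference value are illustrative
N = 100_000

# Draw exposure concentrations (ng/m3) log-uniformly between the
# ambient-air and in-shop extremes reported in the abstract
conc = np.exp(rng.uniform(np.log(4.10), np.log(2.0e6), size=N))

reference_level = 300.0  # ng/m3; assumed chronic inhalation reference value

# HQ > 1 flags exposures above the reference level
hq = conc / reference_level
prob_exceed = (hq > 1.0).mean()
print(f"P(HQ > 1) = {prob_exceed:.2f}")
```

In a full assessment, each exposure group (gold shop workers, refiners, nearby residents) would get its own concentration and intake distributions, yielding group-specific HQ distributions.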
15:20-15:40 Coffee Break
Session 3 (15:40-17:30)
Chairperson: Kunio Shimizu (The Institute of Statistical Mathematics)
15:40-16:20
▼The Importance of Environment and Life Stage on Interpretation of Silky Shark Relative Abundance Indices for the Equatorial Pacific Ocean
Cleridy Lennert-Cody (Inter-American Tropical Tuna Commission), Shelley C. Clarke (Food and Agriculture Organization of the United Nations),
Alexandre Aires-da-Silva (Inter-American Tropical Tuna Commission), Mark N. Maunder (Inter-American Tropical Tuna Commission),
Peter J.S. Franks (UC San Diego), Marlon Román (Inter-American Tropical Tuna Commission), Arthur J. Miller (UC San Diego), Mihoko Minami (Keio University)
A Pacific-wide approach was taken to study the importance of environment on interpretation of trends in silky shark relative abundance for the equatorial Pacific Ocean.
Shark bycatch data collected by observers aboard purse-seine vessels fishing for tunas were used to estimate standardized trends in relative abundance with zero-inflated negative binomial generalized additive models.
Estimated trends for different silky shark life stages for several regions across the equatorial Pacific were compared to the Pacific Decadal Oscillation (PDO), an index of Pacific Ocean climate variability.
Correlation between silky shark indices and the PDO was found to differ by region and shark life stage.
The highest correlations were for juvenile silky sharks from the western region of the equatorial eastern Pacific (EP) and from the equatorial western Pacific. This correlation disappeared in the inshore EP.
Throughout, correlations with the PDO were generally lower for adult sharks.
These results suggest that the temporal changes in the juvenile shark indices are driven by movement of animals across the Pacific as the eastern edge of the Indo-Pacific Warm Pool shifts location with ENSO events.
Bayesian Curve Fitting for Discontinuous Functions using Overcomplete System with Multiple Kernels
Youngseon Lee (Samsung SDS),
Jaeyong Lee (Seoul National University),
Shuhei Mano (The Institute of Statistical Mathematics)
We propose a Bayesian model for estimating functions that may have jump discontinuities, and a variational method for model inference.
The proposed model is an extension of the LARK model, which enables functions to be represented by a small number of elements from an overcomplete system comprising multiple kernels.
The latent features, such as the locations of jumps, the number of elements, and the smoothness of the function, are automatically determined by the Lévy random measure, so there is no need for model selection.
An analysis of real data illustrates that the proposed model effectively detects latent features, and its performance is better than that of standard nonparametric models for the estimation of discontinuous functions.
We also show that the suggested variational method significantly reduces computation time compared with the conventional Bayesian inference method (reversible jump Markov chain Monte Carlo).
Integrating Multiple Sources of Ecological Data to Estimate Abundance of a Number of Species at Geographic Scales
Keiichi Fukaya (The National Institute for Environmental Studies)
The species abundance distribution (SAD), represented by the number of individuals per species within an ecological community, is one of the fundamental characteristics of biodiversity.
Despite their obvious significance in ecology and biogeography, there is still no clear understanding of the patterns of species abundance at large spatial scales,
mainly because of the great survey effort required to obtain species abundance data.
Thus, such data are generally only obtained from local communities, which is not sufficient to assess SADs at broad spatial scales.
Here, we developed a class of hierarchical models to estimate macroscale SADs.
By integrating less expensive ecological data (i.e. spatially replicated multispecies detection-nondetection observations and data on species geographic distributions),
the model estimates the abundance of each species in discrete geographical units (e.g. grid cells).
We applied the model to a large dataset of woody plant communities comprising more than 40,000 vegetation survey records along with geographical ranges of species from various data sources.
As a result, estimates of absolute abundance of 1,248 species at a 10-km-grid-square resolution over East Asian islands across subtropical to temperate biomes were obtained.
These results highlight the potential of the elucidation of macroscale SADs that have thus far been an inaccessible but critical property of biodiversity.
【26 March】
Session 4 (10:30-11:50)
Chairperson: Shogo Kato (The Institute of Statistical Mathematics)
Meteorological Change and Impacts on Air Pollution -- Results from North China
Ziping Xu (Yuanpei College),
Song Xi Chen (Peking University),
Xiaoqing Wu (Iowa State University)
There is speculation that the severe air pollution experienced in North China is the result of climate change in general and a decreasing northerly wind in particular.
We first conduct a retrospective analysis of 38 years (1979-2016) of reanalyzed meteorological data from ERA-Interim,
an archive of the European Centre for Medium-Range Weather Forecasts (ECMWF), to quantify meteorological changes over the 38 years.
Statistically significant changes have been detected in the surface temperature, relative humidity and boundary layer height in the region between the first and the second 19-year periods from 1979 to 2016.
However, there was no significant reduction in the northerly wind within the mixing layer.
We then build regression models of PM2.5 on the meteorological variables using the 2015 and 2016 observations at 32 cities of the study region,
which are used to quantify the effects of the meteorological changes between the two 19-year periods on PM2.5.
It is found that the average meteorological changes led to 2% to 7% reduction in monthly PM2.5 averages in most cities.
Hence, climate change may not be responsible for the air pollution situation in North China.
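The quantification step described above, regressing PM2.5 on meteorological variables and then propagating a shift in the covariate means, can be sketched on synthetic data. Everything below (covariates, coefficients, shifts) is invented for illustration; it is not the study's data or fitted model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic illustration: regress log PM2.5 on meteorological covariates,
# then evaluate the effect of a shift in covariate means between periods
n = 500
temp = rng.normal(10, 5, n)       # surface temperature (deg C)
rh = rng.normal(60, 15, n)        # relative humidity (%)
blh = rng.normal(800, 200, n)     # boundary-layer height (m)
X = np.column_stack([np.ones(n), temp, rh, blh])

# Synthetic response with known coefficients plus noise
log_pm = 4.0 + 0.02 * temp + 0.01 * rh - 0.001 * blh + rng.normal(0, 0.3, n)

beta, *_ = np.linalg.lstsq(X, log_pm, rcond=None)

# Assumed between-period shifts in (intercept, temp, rh, blh)
delta = np.array([0.0, 1.0, -2.0, 50.0])
pct_change = (np.exp(delta @ beta) - 1) * 100
print(f"Predicted PM2.5 change: {pct_change:.1f}%")
```

On the log scale, the predicted percentage change in PM2.5 follows directly from the linear predictor shift, which is how period-to-period meteorological effects can be summarized city by city.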
Using the Bootstrap in Generalized Regression Estimation
James G. Booth (Cornell University), Alan H. Welsh (Australian National University)
We discuss a generalized regression estimation procedure that can lead to much improved estimators of general population characteristics, such as quantiles, variances, and coefficients of variation.
The method is quite general and requires minimal assumptions, the main ones being that the asymptotic joint distribution of the target and auxiliary parameter estimators is multivariate normal,
and that the population values of the auxiliary parameters are known.
The assumption on the asymptotic joint distribution implies that the relationship between the estimated target and the estimated auxiliary parameters is approximately linear with coefficients determined by their asymptotic covariance matrix.
Use of the bootstrap to estimate these coefficients avoids the need for parametric distributional assumptions.
First-order correct conditional confidence intervals based on asymptotic normality can be improved upon using quantiles of a conditional double bootstrap approximation to the distribution of the studentized target parameter estimate.
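The core adjustment described above can be sketched in a toy example: estimate a target parameter (here a population variance), bootstrap the joint distribution of the target and auxiliary estimators, and use the bootstrap covariance to regress the target estimate toward the known auxiliary value. This is a simplified single-auxiliary illustration of the idea, with arbitrary simulated data, not the authors' full procedure or their double-bootstrap intervals.

```python
import numpy as np

rng = np.random.default_rng(3)

# Known auxiliary parameter: the population mean (assumed known)
population_mean = 0.0
sample = rng.normal(population_mean, 2.0, size=100)

theta_hat = sample.var(ddof=1)   # target: variance estimate
eta_hat = sample.mean()          # auxiliary parameter estimate

# Bootstrap the joint distribution of (theta_hat, eta_hat)
B = 2000
boot = np.array([
    (s.var(ddof=1), s.mean())
    for s in (rng.choice(sample, size=sample.size) for _ in range(B))
])
cov = np.cov(boot, rowvar=False)

# Regression coefficient of the target on the auxiliary estimator,
# estimated from the bootstrap covariance (no parametric assumptions)
slope = cov[0, 1] / cov[1, 1]

# Generalized-regression adjustment toward the known auxiliary value
theta_adj = theta_hat - slope * (eta_hat - population_mean)
print(theta_hat, theta_adj)
```

The adjustment exploits the approximately linear relationship between the two estimators implied by their joint asymptotic normality; with several auxiliary parameters, the scalar slope becomes a vector of coefficients from the corresponding covariance matrix.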
11:50-12:00
Closing Address
Pierre Dutilleul (McGill University)