Earthquakes are regarded as the realization of a point process modeled by a generalized Poisson distribution. We assume that the Gutenberg-Richter law, with a constant b value, describes the magnitude distribution of all the earthquakes in a sample. We model the occurrence rate density of earthquakes in space and time as the sum of two terms, one representing the independent, or spontaneous, activity and the other representing the activity induced by previous earthquakes. The first term depends only on space and is modeled by a continuous function of the geometrical coordinates, obtained by smoothing the discrete distribution of past instrumental seismicity. The second term also depends on time and is factorized into two terms that depend, respectively, on the spatial distance (according to an isotropic normal distribution) and on the elapsed time (according to the generalized Omori law) from each past earthquake. Knowing the expected rate density, the likelihood of any realization of the process (represented in practice by an earthquake catalog) can be computed straightforwardly. This algorithm was used in two ways: (1) during the learning phase, for the maximum likelihood estimation of the few free parameters of the model, and (2) for hypothesis testing. For the learning phase we used the catalog of Italian seismicity (M ≥ 3.5) from May 1976 to December 1998. The model was then tested on a new and independent data set (January-December 1999). For this short time period we demonstrated that, in the Italian region, this time-dependent model performs significantly better than a stationary Poisson model, even when its likelihood is computed excluding the obvious component of main shock-aftershock interaction. © 2001 American Geophysical Union
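For concreteness, the rate density described above can be written schematically as follows. This is a sketch consistent with the abstract, not necessarily the paper's notation: the parameter names K, c, p, sigma, and f_r are illustrative assumptions, and the triggering productivity is taken here as a constant K, although in practice it may depend on the magnitude of the triggering event.

\[
\lambda(x,y,t,m) = \beta e^{-\beta(m-m_0)} \left[ f_r\,\mu(x,y) \;+\; \sum_{i:\,t_i<t} \frac{K}{(t-t_i+c)^p}\,\frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{r_i^2}{2\sigma^2}\right) \right],
\]

where \(\mu(x,y)\) is the smoothed spatial density of past instrumental seismicity, \(r_i\) is the epicentral distance from earthquake \(i\), \(K/(t-t_i+c)^p\) is the generalized Omori law, the Gaussian factor is the isotropic spatial kernel, and \(\beta e^{-\beta(m-m_0)}\) is the Gutenberg-Richter magnitude density with \(\beta = b \ln 10\). The likelihood of a catalog \(\{(x_i, y_i, t_i, m_i)\}\) then follows from standard point-process theory:

\[
\ln L = \sum_i \ln \lambda(x_i, y_i, t_i, m_i) \;-\; \int \lambda \, dx\, dy\, dt\, dm .
\]

A minimal numerical sketch of these two quantities (not the authors' code; parameter names are hypothetical, and the integral of the rate over the space-time-magnitude volume is assumed to be supplied by a separate numerical quadrature) might look like:

```python
# Sketch of the clustered rate density and point-process log likelihood
# described in the abstract. Parameter names (fr, K, c, p, sigma) are
# illustrative assumptions, not the paper's notation.
import numpy as np

def rate_density(x, y, t, quakes, mu, fr, K, c, p, sigma):
    """Rate density at (x, y, t): spontaneous term fr * mu(x, y) plus the
    sum of contributions from all earlier earthquakes, each factorized into
    a generalized Omori term in time and an isotropic Gaussian in space."""
    lam = fr * mu(x, y)
    for (xi, yi, ti) in quakes:
        if ti < t:  # only earthquakes preceding time t contribute
            r2 = (x - xi) ** 2 + (y - yi) ** 2
            omori = K / (t - ti + c) ** p                       # generalized Omori law
            gauss = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
            lam += omori * gauss
    return lam

def log_likelihood(catalog, mu, fr, K, c, p, sigma, integrated_rate):
    """Standard point-process log likelihood: sum of log rates at the
    observed events minus the rate integrated over the whole volume
    (integrated_rate must be computed separately, e.g. by quadrature)."""
    s = sum(np.log(rate_density(x, y, t, catalog, mu, fr, K, c, p, sigma))
            for (x, y, t) in catalog)
    return s - integrated_rate
```

Maximizing this log likelihood over the few free parameters (the learning phase) and comparing the resulting value against that of a stationary Poisson model on an independent catalog is the hypothesis-testing scheme the abstract describes.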