User talk:Dr Khalid FAHMI

Earthquake Duration Magnitude
Dr Khalid FAHMI (talk) 14:32, 30 April 2010 (UTC)

Background

The concept of earthquake DURATION magnitude - originally proposed by Bisztricsany in 1958 using surface waves only - is based on the realization that on a recorded earthquake seismogram the total length of the seismic wavetrain - sometimes referred to as the CODA - reflects the event's size. Thus larger earthquakes give longer seismograms [as well as stronger seismic waves] than small ones. The time interval measured on an earthquake record - starting with the first seismic wave onset and ending when the wavetrain amplitude diminishes to 10% of its maximum recorded value - is referred to as the "earthquake duration". It is this concept that Bisztricsany first used to develop his Earthquake Duration Magnitude Scale employing surface wave durations.

Earthquake Duration Magnitude [Md] Development

In 1965, Solovev proposed the use of total duration instead of the duration of surface waves. In 1972, Lee et al. used coda duration for the first time to estimate the Richter magnitude of local Californian earthquakes. Based on their study, they suggested that it is appropriate to estimate the magnitude of local earthquakes using signal duration. More recently, developments in instrumentation led to the use of signal duration to estimate the coda magnitude (Md) for earthquakes recorded on short-period vertical seismographs. Numerous studies have determined the relation between coda duration and magnitude for different regions of the world. According to a recent study by Mandal et al. (2004), previous studies showed that duration magnitude estimation is quite stable for local earthquakes ranging from magnitude Md 0.0 to 5.0.

Md Empirical Relationships

In the two most recent investigations using statistically stable samples, one for Italian earthquakes [approximately 100,000 events over the period 1981-2002 in the Richter local (ML) magnitude range of 3.5-5.8] and one for Indian earthquakes, exemplified by a 2001 aftershock sequence of 121 events with Ms (surface wave magnitude) > 4.0 in the Bhuj area of northwestern India, the latest EMPIRICALLY derived equations for Md determinations were published:

$$M_d = 2.49 \log_{10}(T) - 2.31 + \text{station correction factor}$$         (Castello et al., 2007)

$$M_d = 0.80\,[\log_{10}(T)]^2 + 1.7 \log_{10}(T) - 0.87$$        (Mandal et al., 2004)
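As an illustration, both published relations can be evaluated directly for a given signal duration T (in seconds). A minimal sketch: the station correction factor in the Castello et al. (2007) relation is network-specific and not given in the text, so a placeholder value of zero is assumed here.

```python
import math

def md_castello(duration_s, station_correction=0.0):
    """Duration magnitude per Castello et al. (2007) for Italian events.
    station_correction is network-specific; 0.0 is a placeholder."""
    return 2.49 * math.log10(duration_s) - 2.31 + station_correction

def md_mandal(duration_s):
    """Duration magnitude per Mandal et al. (2004) for the Bhuj aftershocks."""
    x = math.log10(duration_s)
    return 0.80 * x**2 + 1.7 * x - 0.87

# Example: a 60-second coda duration
md1 = md_castello(60.0)   # ~ 2.12 (without any station correction)
md2 = md_mandal(60.0)     # ~ 4.68
```

The large gap between the two estimates for the same duration underlines why such relations are regional: each regression is calibrated against its own network, instruments and crustal attenuation.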

ML from Md

Although conversion between empirically derived, "sensitive" seismic parameters such as earthquake magnitude scales is mathematically cautioned against as well as physically limited, some seismologists, such as Brumbaugh, have nevertheless attempted to produce a relationship linking ML to Md for a rather small sample of 17 events in Arizona:

$$M_L = 0.936\, M_d - 0.16 \;\pm\; 0.22$$
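Given the caution above, any use of this regression should carry its uncertainty along. A small sketch that returns both the point estimate and the +/- 0.22 band:

```python
def ml_from_md(md):
    """Brumbaugh's Arizona regression (17 events): ML = 0.936*Md - 0.16.
    Returns the ML estimate and its +/- 0.22 uncertainty interval."""
    ml = 0.936 * md - 0.16
    return ml, (ml - 0.22, ml + 0.22)

# Example with a hypothetical duration magnitude of 4.0
estimate, band = ml_from_md(4.0)   # estimate ~ 3.58, band ~ (3.36, 3.80)
```

For a sample of only 17 events, the +/- 0.22 spread is arguably the most important part of the result.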

Earthquake Source Physics
Preamble

A scientifically sound comprehension of earthquake source physics has remained an elusive goal. Over the past 70 years, specialists in mathematics, physics, geology, geophysics and seismology worldwide have studied the physics of earthquake sources, tectonic stress accumulation, strain energy release as well as crack propagation and earthquake fault motions. Similarly, earthquake engineers have investigated structural vulnerability and the vibrational and damage response to hazardous earthquake strong ground accelerations. The wealth of specialized information and research results thus presented has formed a solid foundation for focusing on the reality behind the physics of earthquake source rupture.

Historical Background

The first plausible conception of earthquake source physics was published by Reid in 1910 in conclusion to his studies of the 1906 San Francisco earthquake [Ms = 7.8]. As coined by Reid, the Elastic Rebound theory envisages a progressive regional crustal movement causing elastic deformation of the ground until a breaking point is reached. At this instant an earthquake fault is generated, strain energy is released, and displacement across the fault occurs. Reid's idea of progressive regional movement causing the gradual build-up of tectonic stress, which is abruptly released as strain energy in occasional earthquakes, appears to conform with the current understanding of the theory of plate tectonics.

The available literature on seismic and/or earthquake source theory and analysis is prolific and well established. Considering only textbook sources, the following (arranged alphabetically) are outstanding: Aki & Richards (1980), Anderson (1989), Bolt (1987), Bullen & Bolt (1985), Cox & Hart (1986), Gutenberg (1959), James (1989), Jeffreys (1959), Richter (1958), Rikitake (1976), Rikitake (1982).

Earthquake Source Physics 

Conception

Earthquakes result from rapid dislocation and slip on a fault system in the upper lithosphere of the earth. The Elastic Rebound Theory of earthquake generation developed by Reid (1910) was amongst the first attempts at explaining the mechanism of earthquakes. Regional tectonic stress accumulation causes elastic, followed by plastic, deformation until the acting stresses exceed the strength of the rock masses, leading to sudden RUPTURE and thus causing an earthquake. Stored strain energy is released and the ground is faulted, leading to dislocation and FAULT DISPLACEMENT & SLIP.

Earthquake fault displacements and the associated tectonic crustal stresses and strains are described by rather complicated constitutive and analytical equations collectively known as DISLOCATION THEORY. From the purely physical and conceptual viewpoints, there are two distinct types of dislocation, depending on the overall relative movements on the fault surface. A SCREW dislocation is one in which the displacement is PARALLEL to the dislocation axis, while an EDGE dislocation is one in which the displacement is PERPENDICULAR to the dislocation axis. It must be emphasized at this juncture that these two extreme cases are purely hypothetical. In actual observations, the analyzed final motions on earthquake faults are modelled as combinations - in various degrees - of both screw and edge dislocations. According to Stacey (1992), the large 1906 San Francisco earthquake [Ms = 7.8] was modelled as a screw dislocation, while the great 1964 Alaska earthquake [Ms = 9.2] was approximated by a "double" edge dislocation.
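To make the SCREW case concrete, textbook dislocation theory (this result is not given in the text above) assigns an ideal screw dislocation along the z-axis the displacement field u_z = b*theta/(2*pi), where b is the Burgers vector magnitude and theta the angular position around the axis: the displacement is parallel to the axis, as stated above, and going once around the axis accumulates exactly one Burgers vector. A minimal sketch:

```python
import math

def screw_displacement(x, y, b=1.0):
    """Axial displacement u_z around an ideal screw dislocation on the
    z-axis: u_z = b * theta / (2*pi). The displacement is PARALLEL to
    the dislocation axis, growing linearly with the angle theta."""
    theta = math.atan2(y, x)   # angle around the dislocation axis
    return b * theta / (2.0 * math.pi)

# On the positive x-axis (theta = 0) the displacement is zero;
# a quarter turn later (theta = pi/2) it has grown to b/4.
```

The b/(2*pi) multiple of theta is exactly what makes the field multi-valued across the cut plane: the jump of b across that plane is the slip the dislocation represents.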

One of the most comprehensive and up-to-date advanced seismology texts describing seismic or earthquake source theory is that of Aki & Richards (1980) (op. cit.). Most of what follows is therefore taken from this important source.

According to Aki & Richards, the kinematics (i.e., considering only the velocities and accelerations generated by seismic energy release) of an earthquake source as observed and studied from the far field (e.g., epicentral distances > 500-1000 km) necessitates the mathematical formulation and generation of displacement waveforms of P- (compressional) and S- (shear) waves for faulting in a homogeneous, isotropic, infinite (i.e., unbounded) medium. With the introduction of FIVE SOURCE PARAMETERS, viz., fault length L, fault width W, rupture velocity υ, permanent or final slip D and rise time T, and employing a fault model with unidirectional propagation, the displacement waveforms of P- and S-waves are described by a simple integral of the form:


$$\Omega(\mathbf{x},t) = \iint_{\Sigma} \Delta u(\xi,\, t - r/c)\; d\Sigma \qquad (1.1)$$

where:

Δu = slip function for a shear fault rupture
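The integral (1.1) can be sketched numerically by discretizing the fault surface into patches and summing their delayed contributions. The toy model below is a unilaterally rupturing (Haskell-type) line source with a ramp slip function; all parameter values are hypothetical, chosen only to make the sketch runnable, and the geometric spreading and radiation-pattern factors of the full far-field solution are deliberately omitted.

```python
# Hypothetical values for the five source parameters plus medium/receiver
L_fault = 20e3   # fault length L (m)
W_fault = 10e3   # fault width W (m)
v_rupt  = 2.5e3  # rupture velocity (m/s)
D_slip  = 1.0    # permanent (final) slip D (m)
T_rise  = 1.0    # rise time T (s)
c_wave  = 3.5e3  # wave speed c (m/s), e.g. S waves
r0      = 600e3  # source-receiver distance r (m), far field

def slip_rate(t):
    """Ramp slip function: slip grows linearly from 0 to D over the rise
    time, so the slip rate is a boxcar of height D/T."""
    return D_slip / T_rise if 0.0 <= t < T_rise else 0.0

def far_field_pulse(t, n=200):
    """Approximate the surface integral in Eq. (1.1) as a sum over n
    fault patches, each delayed by the rupture-front arrival (xi / v)
    plus the travel time to the receiver (r / c)."""
    dxi = L_fault / n                      # patch length along strike
    total = 0.0
    for i in range(n):
        xi = (i + 0.5) * dxi               # patch position along the fault
        delay = xi / v_rupt + r0 / c_wave  # rupture delay + travel time
        total += slip_rate(t - delay) * dxi * W_fault
    return total
```

The resulting pulse is nonzero only during a window of roughly L/υ + T seconds after the travel time r/c, which illustrates how the five source parameters control the far-field signal duration.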

--Dr Khalid FAHMI (talk) 18:18, 16 May 2010 (UTC)

Modeling & Prediction

In the mid-1970s, earthquake source theory research attempted to model large (actual) earthquake ground motions from the coda of smaller (real) earthquake records through a mathematically and physically acceptable, non-randomized but rather successive and consecutive building-up or summation process of the maximum recorded amplitudes (Fahmi, 1980). This so-called augmentation or empowering process is cross-checked on the seismic spectrum by observing a gradual - sometimes "painstaking" - logarithmic increase in spectral power of the maximum amplitudes of both body and surface waves. In actual theory and observation this so-called "build-up" was first researched by Fahmi (1980). In this work, it was clearly and unequivocally concluded that there is reasonable evidence to suggest that this situation of "successiveness" and "consecutiveness" as a triggering mechanism of earthquakes is, in actual fact, not far removed from the well-established theories of earthquake source physics, occurrence and crack propagation (see, for example: Haskell, 1964, 1966 & 1968; Aki, 1967 & 1968; Anderson, 1976; Das & Aki, 1977a & 1977b; Kanamori, 1973; among many others). In fact, the experimental phase of Fahmi's research showed that such a case was practically possible and could occur in at least two earthquake-prone regions on opposite sides of the globe, viz., the Chilean Nazca plate earthquake source region and the triple plate junction of the New Britain - New Ireland, Papua New Guinea earthquake region (op. cit.).

Nearly two decades later, in 1999, Ian Main published a very attractive DEBATE in the scientific journal "Nature" essentially corroborating Fahmi's original approach of "non-randomization". Main (1999) clearly states that the earthquake generating process "is not completely random". He goes on to say that the seismogenesis of earthquake phenomena possesses an observed SPATIO-TEMPORAL FRAMEWORK stemming from a characteristic, mathematically and physically understandable, thematically naturally occurring event. Main (op. cit.) goes on to say that the scale-invariant nature of faulting morphology, the earthquake recurrence distribution, the SPATIO-TEMPORAL clustering of seismicity, the generally and relatively constant value of the dynamic stress drop parameter and, finally, the apparent EASE with which earthquakes may be TRIGGERED by small PERTURBATIONS in the overall tectonic stress distribution in the lithosphere are all testaments - to a certain degree - to DETERMINISM and PREDICTABILITY in the properties of earthquake phenomena.

One of the PRINCIPAL challenges to seismologists working in the field of earthquake prediction [EP] is the TIME SCALE DEPENDENCY [TSD] of earthquake occurrence. In what follows we will review the studies of some of the most eminent earthquake prediction seismologists, currently living or since deceased. During February 10 & 11, 1995 an outstanding gathering was held at the National Academy of Sciences in Irvine, California, U.S.A. The colloquium, called "Earthquake Prediction: The Scientific Challenge", was chaired by the eminent Leon Knopoff of UCLA and organized by top seismologists including Keiiti Aki, Hiroo Kanamori, V. I. Keilis-Borok, Lynn Sykes, Clarence Allen & James Rice (visit: www.pnas.org). In all, 18 research papers were presented. For our purposes and in relevance to this section we will report the ESSENCE of Kanamori's SHORT-TERM "EP" [STEP ~ 1-5 years], V. I. Keilis-Borok's INTERMEDIATE-TERM "EP" [ITEP ~ 5-15 years] and Sykes's INTERMEDIATE- & LONG-TERM "EP" [LTEP ~ 15-30 years]. Dr Khalid FAHMI (talk) 16:56, 8 May 2010 (UTC)