Scientific challenge: predicting earthquakes or avoiding them?
Almost every time a large earthquake occurs near a residential area, the question is raised in the media: why was the event not anticipated?
The argument is that a successful forecast would reduce loss of life, if not necessarily economic damage, by allowing the evacuation of dangerous buildings, the clearing of tsunami-prone coastal areas, and the readiness of hospitals and rescue teams.
However, most natural hazard experts argue that prediction is, at best, a still-unattained pinnacle of seismology and, at worst, something that has adverse effects on our ability to manage disasters.
First, we need to clarify what is meant by "earthquake prediction". What we mean is that, before an earthquake occurs, a correct and reasonably precise estimate is made of its magnitude, location, and time of occurrence.
For a forecast to be useful, the earthquake in question should preferably be strong (magnitude 6 or higher), and the forecast should specify all three parameters.
Of course, the prediction must also be logical, rest on rational reasoning, and follow recognized scientific methods.
So, predicting that an earthquake with a magnitude between 7.4 and 7.6 may occur at a specific location between 4:00 PM and 8:00 PM on a specific day would be very good. By contrast, predicting that an event with a magnitude anywhere between 2.0 and 7.6 will occur somewhere in all of Iran, or in a broad region extending inside and outside the country, at some point during the month of August is definitely not useful (and will almost certainly turn out to be wrong).
Consider this prediction claim (recently made and published in Iran): "Given that there was a strong earthquake with a magnitude of 4.2 on 24 July 2022 in the Bandar Khamir area (South Zagros, Hormozgan), based on the proposed model, the probability of the next earthquake occurring in the period of 25 July to 25 August has been identified. By examining the seismographic data from 26 June 2018 to 24 July 2022, 755 severe earthquakes were reported in Iran.
By compiling the occurrence times of these earthquakes, it is observed that the interval between one strong earthquake and the next is at most 29 days; in other words, whenever a strong earthquake has occurred, the next one has followed within 29 days or less."
Reading such a report, we may conclude that this type of forecast contains several fundamental errors and shows that the forecaster is not even familiar with the basics of seismology: first, an earthquake of magnitude 4.2 is called "strong" (a strong earthquake has a magnitude of 6 or greater); second, no reason is given for choosing a four-year time window; and the basis of the "proposed model" is never explained, among other problems.
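To see why this style of reasoning is vacuous, here is a minimal sketch (using a hypothetical, randomly generated stand-in catalog, not the real Iranian data) of the "maximum gap between events" argument:

```python
# A minimal sketch of the "maximum gap between events" reasoning used in the
# claim above, applied to a hypothetical, randomly generated catalog (not the
# real Iranian data), to show how little information such a statement carries.
from datetime import date, timedelta
import random

random.seed(0)

# Roughly four years of hypothetical events with a low magnitude threshold:
# an active region easily yields hundreds of qualifying earthquakes.
start = date(2018, 6, 26)
event_days = sorted(random.sample(range(1490), 755))  # 755 events, as in the claim
events = [start + timedelta(days=d) for d in event_days]

# The claimed "model" boils down to the largest gap between consecutive events.
gaps = [(later - earlier).days for earlier, later in zip(events, events[1:])]
print("maximum gap:", max(gaps), "days")
print("average gap:", round(sum(gaps) / len(gaps), 1), "days")

# With 755 events spread over about 1490 days, the average gap is about 2 days,
# so "another event within a few weeks, somewhere in the country" is true almost
# by construction and says nothing about the magnitude or location of that event.
```

With hundreds of qualifying events in a four-year window, some short maximum gap is essentially guaranteed, so the resulting "forecast" contains no usable information about where the next event will strike or how large it will be.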
So, what are the drawbacks of announcing prediction results and relying on them? Leaving aside the basic scientific and technical problems for now, the first problem lies in the very act of "announcing" predictions, especially long-term and large-scale ones.
For example, the Loma Prieta earthquake in 1989 (Northern California) caused significant damage in the San Francisco Bay area of California. Twelve hours after the event, the US Geological Survey (USGS) reportedly claimed to have "predicted" the earthquake in a report the previous year. Various other claims have been made about the prediction.
In 1990, some 18 reports claimed that scientific predictions of the 1989 Loma Prieta earthquake had been made; these predictions were identified "in retrospect". In such cases, forecasts with the correct time and location used windows so wide (for example, covering a large part of California over five years) that they had lost their predictive value, while others were issued with a probability of only 30% over ten- or twenty-year windows.
One of the predictions under discussion used the M8 algorithm, originally proposed by Vladimir Keilis-Borok, a leading scientist in this field, and his colleagues. Keilis-Borok, a prominent Soviet scientist who emigrated to the United States after the collapse of the Soviet Union, received his doctorate in mathematical geophysics from the Academy of Sciences in Moscow in 1948.
He was the founder and director emeritus of the International Institute of Earthquake Prediction Theory and Mathematical Geophysics in Moscow. The forecast in question got both the magnitude (it specified M 7.5) and the time wrong (a five-year window from January 1, 1984, to December 31, 1988, which ended before the October 1989 event).
A revision then extended the time window to July 1, 1992, with a region covering more of California and half of Nevada, while a reduced-location version narrowed the forecast target to central California; the magnitude range remained the same, but the magnitude presented for central California was M 7.0.
Of two revisions of the same model, the five-year window of one expired in July 1989 and therefore missed the Loma Prieta event; the second revision extended to 1990 and did include the Loma Prieta earthquake.
When discussing the success or failure of predictions of the 1989 Loma Prieta earthquake, some scientists argue that the earthquake did not occur on the San Andreas fault itself (the target of most predictions) and that its rupture involved a substantial dip-slip (vertical) component in addition to the horizontal strike-slip component, and that it was therefore not the predicted event.
Other scientists argued that the Loma Prieta earthquake occurred within the San Andreas fault "zone" and released much of the strain accumulated since the 1906 San Francisco earthquake.
Dr. Susan Hough, a well-known seismologist at the United States Geological Survey, believes that, viewed in this way, the Loma Prieta earthquake was not actually predicted; rather, some predictions were made that were only partially successful.
Let's imagine a scenario in which long-term prediction is possible: an accurate prediction is made today that a magnitude 7.4 earthquake will strike a hypothetical city one year from now. If we were 100% confident in the prediction, the city could be evacuated in advance, dangerous buildings demolished, and emergency services made ready.
But what would be the economic and social effect of this prediction over the coming year? Many people would likely leave, businesses would close, and the economy would suffer, so the economic and social cost to the city would be very high and might actually exceed the cost of the earthquake itself. This is made worse when we consider that the forecast cannot be 100% reliable - in fact, it is far from that - meaning it could be a false alarm, or the magnitude could well be an overestimate.
The predicted location could also simply be wrong. So if the economic and social effects of a very long-term forecast are problematic, what about short-term forecasts? Suppose the same earthquake is predicted to strike the same city within 24 hours. This avoids the long-term economic and social effects while still allowing a high level of preparedness: buildings can be evacuated, hospitals prepared, schools closed, and so on.
This is attractive, but the practical problem again lies in the uncertainty of the prediction. Suppose the prediction is exactly right in time and magnitude but misplaces the location by 200 km. If the population is moved out of the forecast area and toward the actual epicenter, the consequences can be catastrophic; such a prediction can make the earthquake's toll far worse than if no prediction had been made at all.
Alternatively, suppose the location and magnitude are exactly right, but the earthquake strikes three days later than predicted. By then the population will most likely have begun returning to the area, leaving it more vulnerable than before.
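The trade-off can be made concrete with a very rough expected-cost sketch; all of the costs, the probabilities, and the decision rule below are hypothetical illustrations, not a real risk model:

```python
# A minimal sketch with entirely hypothetical costs and probabilities, illustrating
# why acting on an unreliable forecast can cost more than doing nothing.

def expected_cost(p_quake, evacuate, evac_cost, loss_no_evac, loss_with_evac):
    """Expected cost of evacuating (or not), given the probability that the
    forecast earthquake actually strikes the evacuated area."""
    if evacuate:
        return evac_cost + p_quake * loss_with_evac
    return p_quake * loss_no_evac

# Hypothetical numbers in arbitrary units: evacuating a city for a long period
# is expensive, and it only pays off if the forecast is likely enough to be right.
evac_cost = 50.0        # economic and social cost of evacuation and shutdown
loss_no_evac = 200.0    # losses if the quake strikes an unprepared city
loss_with_evac = 40.0   # residual losses even after a successful evacuation

for p in (0.9, 0.5, 0.1):
    act = expected_cost(p, True, evac_cost, loss_no_evac, loss_with_evac)
    wait = expected_cost(p, False, evac_cost, loss_no_evac, loss_with_evac)
    print(f"p = {p:.1f}: evacuate = {act:.0f}, do nothing = {wait:.0f}")
# At p = 0.1 (a forecast that is far from reliable), evacuating has an expected
# cost of 54 versus 20 for doing nothing: the response is worse than the hazard.
```

The point is not the particular numbers but the structure of the problem: unless the forecast probability is high, acting on it costs more, on average, than the hazard it is meant to mitigate.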
In fact, the mechanism of earthquakes makes them more difficult to predict. Some people think that a tectonic earthquake is like a bomb that explodes at a point underground, and energy waves travel away from that point. But the mechanism of real earthquakes is different.
In reality, an earthquake results from the movement of two blocks on either side of a fault, an underground surface usually so deep that it is inaccessible except through seismological observation, and energy waves are radiated from every point of that rupture surface.
The rupture begins at a point, causes slip, and then propagates along the fault plane over a period of time, usually seconds to minutes. In this sense, it has little in common with the model held by non-experts (that of an underground bomb).
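As a rough illustration of the time scale involved (the rupture length and speed below are assumed, typical values rather than figures for any particular event), the duration of a large rupture can be estimated as its length divided by the propagation speed:

```python
# A back-of-the-envelope sketch with assumed, typical values (not figures for any
# specific earthquake): the rupture of a large event is an extended process in
# space and time, not an instantaneous point explosion.

rupture_length_km = 100.0    # assumed rupture length of a large earthquake
rupture_velocity_km_s = 3.0  # typical rupture propagation speed, about 2 to 3 km/s

duration_s = rupture_length_km / rupture_velocity_km_s
print(f"approximate rupture duration: {duration_s:.0f} s")  # roughly 33 seconds

# Seismic waves are radiated from every point the rupture front passes, so the
# recorded ground motion reflects a source spread over tens of kilometres and
# tens of seconds.
```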
Note that non-specialists include educated people who hold advanced degrees in other specialized fields and have general scientific knowledge, but who do not understand the earthquake phenomenon or the geological processes of stress accumulation and deformation in the crust.
Many of this group's efforts also end up producing pseudo-scientific content and results, which of course creates even bigger problems. Many people have no way to tell the specialized fields apart, and challenging this type of activity means confronting the person offering the prediction, which quickly becomes a matter of personal prestige and is itself a difficult undertaking.
In scientific studies of earthquake prediction, fields other than the study of seismic waves also play a role alongside seismology.
For example, geological studies provide information on the slip rates of active faults and on the occurrence of historical earthquakes. These findings can be used to infer the future behavior of faults and their earthquake potential. By studying landforms and geological units of known age that faults have offset, it is possible to determine the displacement of the two sides of a fault relative to each other.
Under favorable conditions, even fault displacements during historical earthquakes can be detected and their size and approximate age can be determined. Significant fault displacement is generally accepted as evidence for large earthquakes, and thus the seismogenic history of a fault can be traced back thousands of years by geological studies.
Exploratory trenches across active fault zones are valuable for such studies, and their use became common in the 1970s. For example, in 1984 the American scientist Kerry Sieh published evidence, obtained from trenching at a site in southern California, of 12 earthquakes that occurred between 260 and 1857 AD along a section of the San Andreas fault.
His study provides the best evidence for a return period of about 145 years for major earthquakes on the San Andreas fault in southern California. Other studies, such as geodetic, magnetic, and electrical measurements and hydrological and chemical analyses, also contribute greatly to our knowledge of the physical and chemical state of rocks and provide clues as to whether rocks are on the verge of failure.
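The arithmetic behind such recurrence estimates can be sketched briefly: the mean interval implied by the Sieh trench record above, and, under assumed slip figures (hypothetical values, not taken from his study), the analogous estimate from a fault's long-term slip rate:

```python
# A minimal sketch of the arithmetic behind recurrence estimates. The trench
# figures follow the Sieh example above; the slip figures are assumed,
# illustrative values only, not taken from his study.

# 12 dated earthquakes between 260 and 1857 AD bound 11 inter-event intervals.
n_events = 12
first_year, last_year = 260, 1857
mean_interval_yr = (last_year - first_year) / (n_events - 1)
print(f"mean recurrence from the trench record: {mean_interval_yr:.0f} years")  # ~145

# The same kind of estimate from fault geology (hypothetical values): if a fault
# slips about 4 m in a characteristic large earthquake and its long-term slip
# rate is about 30 mm/yr, recurrence is on the order of slip divided by rate.
slip_per_event_m = 4.0      # assumed characteristic slip per event
slip_rate_mm_yr = 30.0      # assumed long-term slip rate
recurrence_yr = slip_per_event_m * 1000.0 / slip_rate_mm_yr
print(f"recurrence implied by the slip rate: {recurrence_yr:.0f} years")  # ~133
```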
In addition, rock deformation experiments and measurements of the physical properties of rocks provide data essential to our understanding of the earthquake process. The history of seismological studies, especially after 1960, shows that major advances occurred shortly after the accumulation of quality seismic data beyond that previously observed.
For example, shortly after several hundred seismographs of the global digital seismographic network were deployed in the early 1990s, the model of the Earth's internal structure was rapidly refined. The determination of seismic velocity, density, and other physical parameters for a spherically symmetric Earth model had already been completed in the 1930s by Bullen, Gutenberg, Jeffreys, and other pioneers of 20th-century seismology.
The establishment of a standardized global seismograph network in the early 1960s enabled the study of global seismicity and focal mechanisms on a scale not previously possible. As a result, seismology made a significant contribution to the development of the theory of "plate tectonics" in the late 1960s.
Currently, with digital seismic data available at both local and global scales, further major advances in earthquake seismology are expected. In addition, advances in computing, with increasing power at decreasing cost, are essential for seismologists to process and analyze the ever-growing volume of collected seismic data and extract insight from it. In the 1990s, the importance of monitoring strong ground motion, especially in urban areas, was recognized.
Accelerometric networks with more than 1,000 digital accelerometers were developed in Taiwan, Japan, and Iran. The extensive strong ground motion datasets from the Chi-Chi (Taiwan) earthquake of September 20, 1999, and the Bam (Iran) earthquake of December 26, 2003, clearly showed that near-field data not only provide the information needed for earthquake engineering but also help us better understand earthquakes themselves.
These strong ground motion data—especially those recorded in the vicinity of the fault—showed why previous attempts to predict earthquakes had been unsuccessful. The science of earthquake prediction is still in development and our current ability to predict earthquakes is limited.
From various lines of evidence, it is possible to identify the areas where destructive earthquakes occur. We may even have an approximate estimate of their magnitude and how often they occur, but we do not have the ability to predict accurately when they will occur.
Extensive earthquake prediction research programs have been carried out in China, Japan, the United States, and the Soviet Union, and progress has so far been slow. The fact is that any public "announcement" of prediction results before an earthquake, whether it comes from pseudo-scientific or even scientific work and whatever its purpose or motive, can be expected to create anxiety and pressure both for the public and for the forecasting teams themselves, and in practice it ends up constraining legitimate earthquake forecasting activities.