Though nowhere near as severe as the December 2004 tsunami that left more than 200,000 people dead in the Indian Ocean, this latest quake-generated behemoth wave is a reminder of the volatility of the ocean floor in this part of the world. It also made me wonder: if we know that this region is so prone to tectonic activity and the devastating waves it creates, can we do anything to predict it?
Currently, scientists track tsunamis with surface instruments such as devices on buoys that record small changes in sea-surface elevation.
However, this method is spotty, as it requires that a recorder happen to be in the right location, and the wave could strike anywhere. This type of detection also provides very little advance warning, because it detects the wave only as it passes.
According to Science Magazine, researchers at the National Oceanic and Atmospheric Administration (NOAA) realized that tsunamis could possibly be detected by monitoring the turbulence the waves create in the lower atmosphere. This theory was unwittingly corroborated by data from a satellite that happened to record atmospheric activity during the Sumatra-Java earthquake.
Unfortunately, technology this complex takes years to become operational. In the meantime, there is often little to no warning of an impending tsunami. On Tuesday, by the time the Pacific Tsunami Warning Center issued an alert, the wave had already struck.
The DART system uses bottom pressure recorders to detect slight changes in the overlying water pressure; it can detect a tsunami as small as one centimeter above sea level. If a threatening tsunami passes through and sets off the gauge stations, a tsunami warning is issued to all potentially affected areas, and evacuation procedures in these areas are then implemented. NASA is also heavily involved in the quest to predict deadly tsunamis before they occur.
In the future, such a system may enable more effective advance warning of incoming waves. In the case of the Japan tsunami, the warning systems worked fine; rather, it was the unanticipated size of the event that proved so deadly. That leads us to the biggest problem with tsunamis: once in motion, they can't be stopped. Scientists and civil agencies can only devote resources to predicting tsunamis and creating effective plans for protecting coastal areas from their ravages.
These centers monitor for tsunamis and the earthquakes that cause them, forecast tsunami impacts, and prepare and issue tsunami messages for U.S. states, Pacific and Caribbean territories, and the British Virgin Islands. They also provide forecast information to international partners in the Pacific, the Caribbean, and adjacent regions to help them understand the threat to their coasts so they can decide whether or not to issue alerts.
As with the earthquakes that generate most tsunamis, scientists cannot predict when and where the next tsunami will strike. But based on their knowledge of earthquakes and past tsunamis, scientists at the tsunami warning centers know which earthquakes are likely to generate tsunamis and can issue tsunami messages when they think a tsunami is possible.
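This screening step — deciding from an earthquake's preliminary parameters whether a tsunami message is warranted — can be sketched as a toy decision rule. The thresholds below are invented purely for illustration; they are not the warning centers' actual criteria.

```python
# Hypothetical screening of preliminary earthquake parameters.
# Magnitude/depth thresholds are illustrative assumptions, not official criteria.

def initial_message(magnitude: float, depth_km: float, undersea: bool) -> str:
    """Return a hypothetical alert level from preliminary seismic data."""
    if not undersea or depth_km > 100:
        # Deep or inland earthquakes rarely generate tsunamis.
        return "information statement"
    if magnitude >= 7.9:
        return "warning"
    if magnitude >= 7.0:
        return "advisory"
    if magnitude >= 6.5:
        return "watch"
    return "information statement"

print(initial_message(8.2, depth_km=25, undersea=True))   # warning
print(initial_message(6.0, depth_km=10, undersea=True))   # information statement
```

In practice the decision is made by forecasters weighing far more information, but the point of the sketch is that a fast, coarse rule can be applied within minutes, long before any water-level data arrive.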
To provide timely, reliable, and accurate tsunami warnings and forecasts, warning center scientists must be able to quickly detect and analyze earthquakes and observe and measure tsunamis.
To do this, they depend on information about earthquakes and tsunamis collected from seismic and water-level networks around the world. Seismic networks consist of seismic stations that detect, measure, and record earthquakes and other types of ground motion and transmit information to the warning centers in real time through satellites and other systems. Seismic waves travel many times faster than tsunamis, so information about an earthquake is available before information about any tsunami it may have generated.
Since a tsunami can strike nearby coasts within minutes, warning center scientists use an earthquake's preliminary seismic information (magnitude, location, and depth) to decide if an earthquake could have generated a tsunami and if they should issue an initial tsunami message. For U.S. coastlines, these messages provide alert levels (warning, advisory, watch, information statement), preliminary information about the earthquake, and an evaluation of the threat. Strong currents.
Because of the need for a surface buoy, it is important to avoid strong current regimes, which could cause swamping or dragging of the buoy, or could make buoy maintenance difficult.
However, the high cost of DART acquisition and maintenance may preclude any significant network growth. Even though tsunamis do not occur frequently, redundancy in the array is still desirable; the surface buoy has two independent, complete communication systems for full redundancy. In addition, in high-risk source regions, a certain amount of overlap in spatial coverage is desirable so that instrument failures may be partially compensated by having more than one DART in the region capable of providing a timely, high-quality signal.
Bottom roughness. For reliable communications, the BPR must be deployed on a reasonably flat, smooth seabed that will not produce scattering and interference of the acoustic signals. Although DARTs are typically deployed for two years and have a design life of four years, there is considerable expense associated with deploying and maintaining them in remote regions. For some sites, co-locating DART buoys with other buoy arrays might allow leveraging of ship time and maintenance costs, provided there is no conflict with special DART requirements.
Other considerations in choosing buoy sites include the difficulty or ease of obtaining permission to enter other nations' EEZs (Exclusive Economic Zones), shipping routes, and seafloor infrastructure. These parameters of the DART network clearly deserve frequent reconsideration in light of constantly changing fiscal realities, survivability experience, maintenance cost experience, model improvements, new technology developments (even new DART designs), increasing international contributions, and updated information on the entire suite of siting issues listed in Box 4.
In addition, simulations of the effectiveness of the DART network could inform this process, and the potential contributions of optimization algorithms to the design process have not been exhausted. A component of the periodic re-evaluations of the DART network needs to be the re-evaluation of the prioritization of each group of DART stations, not just individual stations, with detailed justifications for these determinations. In particular, the committee questions the rationale for the very low priority of the group of five DART stations deployed in the Northwest Pacific (Table 4).
The Kuril Islands in particular have been the source of numerous tsunamis large enough to invoke tsunami watches and warnings. At the very least, DART stations covering the Kuril Islands would have a high value for the prevention of false alarms.
Depending on the relative importance assigned to criteria such as these, quite different prioritizations of the DART stations might result. The value of the DART stations in the Northwest Pacific is primarily for the detection of medium to small tsunamis, in order to confirm that a large tsunami has not been generated and thus avoid the issuance of an unnecessary warning with its attendant costly evacuation.
There are no serious gaps in the geographic coverage of the DART network as designed with regard to providing timely and accurate tsunami warnings and forecasts for at-risk U.S. coasts. However, the vulnerabilities of non-U.S. stations remain a concern. Recommendation: NOAA should regularly assess the numbers, locations, and prioritizations of the DART stations in light of constantly changing fiscal realities and the other siting considerations discussed above.
Since the build-up of the DART network began, it has experienced significant outages that can have adverse impacts on the capability of the TWCs to issue efficient warnings, to use near-real-time forecasts, and to cancel warnings when a tsunami threat is over. The data loss also reduces post-tsunami model validation capability. The peaks in performance occur during Northern Hemisphere summer, when maintenance is performed (Figure 4).
Note, however, that the peak values in performance are decreasing with time as well. Maintenance occurs in the summer months, accounting for the annual cycle in Figure 4. The declining trend in performance is emphasized in Figure 4. A system availability of 69 percent is significantly below the network performance goal of 80 percent, which is perhaps not surprising for such a large, new, and admittedly hurriedly deployed set of complex systems operating in very harsh environments.
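The availability figure is simply the fraction of station-hours in which the network is up. A minimal sketch, with hypothetical per-station uptime hours chosen to reproduce the 69 percent figure:

```python
# Network availability as total up-hours over total possible hours.
# The per-station uptime values are hypothetical illustrations.
HOURS_PER_YEAR = 8760

uptime_hours = {"D101": 8760, "D102": 6044, "D103": 3400}

def network_availability(uptimes: dict) -> float:
    """Mean fraction of the year that stations were operational."""
    return sum(uptimes.values()) / (len(uptimes) * HOURS_PER_YEAR)

print(round(network_availability(uptime_hours), 2))  # 0.69 — below the 0.80 goal
```

The same calculation over monthly windows would reproduce the annual maintenance cycle visible in the performance curve.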
The issue of low network performance is exacerbated by the fact that clusters of nearby DART stations tend to be nonoperational for many months, leaving large gaps in DART coverage. For example, five stations cover the region from the Aleutian Islands west of the Dateline, past the Kuril Islands, to Hokkaido. Although the Kuril Islands region produced many small basin-wide tsunamis over the past five years, all of these stations had failed by December, and four had failed in October or earlier.
None were repaired until late June, after weather conditions had improved enough to reduce the risk of shipboard operations. The optimization scheme used for planning the locations of the DART stations and testing their ability to detect tsunamis basin-wide is based on an assumption of nearly 100 percent performance (Spillane et al.). Given the current geographic coverage, the DART network is only useful for tsunami detection and forecasting if it is operational nearly 100 percent of the time.
In a practical sense, when one DART station is inoperative, its neighbors on either side must be operational. If two neighboring DARTs become inoperative, then there must be an immediate mitigating action (Table 4). The committee has assumed that summertime maintenance cycles are, at least in large part, dictated by North Pacific weather.
If this is the case, the maintenance of the high-priority DART buoys may not be practical or even possible. The number of DART II system failures is higher than expected, with a current median time to failure of approximately one year against a design lifetime of four years (Figure 4). To meet the deadline, the DART II was rushed to production and deployment without the customary level of testing required for a complex system like the DART, with its relatively extreme operational environment.
This rapid deployment schedule required an active reliability improvement program, concurrent with initial operations and funding, to sustain effective operations while reliability improvements were defined and implemented. However, budget cuts slowed both maintenance and reliability improvement. Furthermore, NDBC had no prior experience with seafloor instrumentation or acoustic modems.
A report by the Department of Commerce Office of Inspector General found that technology transfers from PMEL to NDBC have not been well coordinated and planned, and it offered several recommendations to address these concerns, such as ensuring that data requirements and technical specifications are clearly defined prior to the transition and that adequate funding is available to cover the transition costs. The report also recommends better coordination on research and development projects between the two NOAA centers to avoid duplication of efforts.
By far the most common problem is mooring hardware failure. A system that requires unanticipated maintenance visits using costly ship time reduces availability of funds for other activities.
The committee analyzed the benefits and disadvantages inherent in each of these maintenance approaches. In order to maintain the current DART network configuration, adequate resources are needed for maintenance, including funding for unscheduled ship time to effect repair and replacement of inoperable DART stations.
The alternative approach would be to invest the majority of resources into improving DART station reliability to get closer to the design goal. In this case, it must be understood and acknowledged that the DART network might be fully deployed but will not be fully functional until the reliability of the DART stations gets much closer to the design goal of a four-year lifetime than the present median time-to-failure of just over one year.
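The gap between the design life and observed reliability can be summarized by the median time to failure. A sketch with hypothetical failure ages (a real estimate would also need to account for censoring, i.e., stations still operating at the time of analysis):

```python
# Median time to failure from observed failure ages (years in service).
# Sample values are hypothetical; real data would include censored units.
import statistics

failure_ages = [0.6, 0.8, 1.0, 1.1, 1.4, 2.0]  # years at failure
DESIGN_LIFE = 4.0                               # stated design goal, years

mttf = statistics.median(failure_ages)
print(mttf)                    # 1.05
print(mttf < DESIGN_LIFE / 2)  # True — far short of the design goal
```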
A partial amelioration of the draconian choices above could come from exploring new maintenance paradigms, such as (1) simplifying the DART mooring for ease of deployment from small, contracted vessels that are available, for instance, from the commercial fishing fleet and the University-National Oceanographic Laboratory System (UNOLS) fleet; and (2) maintaining a reserve of DART buoys for immediate deployment upon the occurrence of a significant gap in the network, weather permitting.
Since the build-up of the DART network began, it has experienced significant outages that have a potentially adverse impact on the capability of the TWCs to issue efficient warnings, use near-real-time forecasts, and cancel warnings when a tsunami threat is over.
Worse, multiple neighboring DART stations have been seen to fail in the North Pacific and North Atlantic, leaving vast stretches of tsunami-producing seismic zones unmonitored, and this situation can persist for long periods of time.
The committee considers it unacceptable that even a neighboring pair of DART stations in high-priority regions is inoperative at the same time.
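Checking for the condition the committee flags — two adjacent stations down at once — is straightforward given a station list ordered along the arc. A sketch with hypothetical station IDs and statuses:

```python
# Flag adjacent pairs of inoperative stations along a subduction-zone arc.
# Station IDs and statuses are hypothetical.

def neighboring_gaps(stations):
    """stations: list of (station_id, operational) ordered along the arc.
    Returns the adjacent pairs in which both stations are down."""
    return [(a, b)
            for (a, a_up), (b, b_up) in zip(stations, stations[1:])
            if not a_up and not b_up]

arc = [("D201", True), ("D202", False), ("D203", False), ("D204", True)]
print(neighboring_gaps(arc))  # [('D202', 'D203')] — an unacceptable gap
```

Run against a live station-status feed, a check like this could trigger the "immediate mitigating action" the committee calls for.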
Although an 80 percent performance goal may be satisfactory for the entire DART network and for individual gauges, much better performance is required for neighboring pairs of DART stations, especially in high-priority regions. Conclusion: Continued engineering refinements of the DART concept will allow NOAA to establish a more sustainable capability with reduced costs of construction and deployment.
Recommendation: The committee encourages NDBC to establish rigorous quality control procedures, perform relentless pre-deployment tests of all equipment, and explore new maintenance paradigms, such as simplification of DART mooring deployment and maintaining a reserve of DART stations for immediate deployment.
Conclusion: DART presents an outstanding opportunity as a platform for acquiring long time series of oceanographic and meteorological variables for climate research and other nationally important purposes. Potentially, a DART buoy could also telemeter data acoustically from a seafloor seismograph, although the demands on DART power would increase proportionally. The additional power requirements for acoustic and satellite telemetry would press the current design of the buoy, thereby increasing risk to the primary goal of tsunami detection.
Nevertheless, broadening the user base could enhance the sustainability of the DART program over the long term, and future designs should consider additional sensors. Other programs, such as the coastal sea level network, have encouraged a broad user base to enhance the sustainability of their infrastructure.
Recommendation: NOAA should encourage access to the DART platform (especially use of the acoustic and satellite communications capabilities) by other observational programs, on a not-to-interfere basis; that is, the primary application (tsunami warning) justifies the cost, but DART presents an outstanding opportunity as a platform for acquiring long time series of oceanographic and meteorological variables for climate research and other nationally important purposes.
Broadening the user base would be expected to enhance the sustainability of the DART program in the future. Conclusion: In a world of limited resources, a strategic decision needs to be made as to whether it is more important to maintain the current DART network at the highest level of performance or to focus on improving the DART station reliability.
A first step could be for NOAA to establish a strategic plan that determines whether (1) it is most important to maintain the DART II network at the highest level of performance right now (meaning that the first priority for resources is maintenance, including funding of costly ship time to repair and replace inoperative DART stations as soon as possible), or (2) it is most important that NDBC focus first on improving DART station reliability, at the possible expense of maintenance.
However, edited bottom pressure data are not available for recent years and are awaiting review. With respect to the integration of U.S. observing systems, tsunami near-real-time observation systems (including seismic, water level, and oceanographic) and data management systems (including modeling and archiving) are key elements of IOOS and GEOSS. The DART buoy platforms present an outstanding opportunity to acquire long time-series data of oceanographic variables for nationally important research and monitoring goals, including climate research.
Giving other observational programs access to the DART platform (especially use of the acoustic and satellite communications capabilities) provides an opportunity to broaden the user base. Similarly, there is great value in the continued coordination of U.S. and international efforts. Conclusion: Because coastal sea level stations have evolved from their primary mission to serve a broad user community, their long-term sustainability has been enhanced. The following are conclusions and recommendations related to the detection of tsunamis with sea level sensors:
Recommendation: NOAA should assess on a regular basis the appropriateness of the spatial coverage of the current DART sea level network and the coastal sea level network.
In particular, NOAA should understand the vulnerabilities of the detection and forecast process to the following: (1) gaps in the distribution of existing gauges and (2) failures of single or multiple stations. A first step in the assessment could be the establishment of explicit criteria, based on TWC forecaster experience and on the arguments outlined for the DART site selection (Spillane et al.). An appropriate aid in this process would be simulations.
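Such a simulation might, for example, estimate how often a given per-station availability leaves two neighboring stations down at once in an idealized linear network. A purely illustrative Monte Carlo sketch, with all parameters invented:

```python
# Monte Carlo estimate of the chance that a linear network of n stations,
# each independently up with probability p_up, contains at least one pair
# of adjacent stations that are both down. All parameters are illustrative;
# a real assessment would use the actual network geometry and failure data.
import random

def pair_gap_probability(n_stations, p_up, trials=20000, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(trials):
        up = [rng.random() < p_up for _ in range(n_stations)]
        if any(not a and not b for a, b in zip(up, up[1:])):
            hits += 1
    return hits / trials

# Raising per-station availability sharply reduces the risk of a double gap:
print(pair_gap_probability(10, 0.69) > pair_gap_probability(10, 0.95))  # True
```

Even this toy version makes the coverage argument quantitative: network-level requirements (no neighboring pair down) translate into much stricter per-station availability targets than the 80 percent network goal.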
The contributions of optimization algorithms to the network design process could be explored more fully as well. Furthermore, this priority list should be merged with the results from the network coverage assessment above to determine the following: (1) maintenance priorities and schedules; (2) network expansion priorities; and (3) identification of critical stations that are not under U.S. control.
An important aspect of this activity would be to develop and publish criteria, such as the following examples: (1) value of a station for initial detection of a large tsunami near an active fault zone, to maximize warning lead time; (2) value of a station for initial detection of a medium to small tsunami, to mitigate false alarms; and (3) value of a station for scaling forecasts.
Recommendation: NOAA should assess on a regular basis the vulnerabilities to, and quality of, the data streams from all elements of the sea level networks, beginning with the highest-priority sites determined per the recommendations above. The risk assessments, along with the prioritization lists described above, could be used to determine priorities, including questions of authority for U.S. stations. The committee would report to the management level within NOAA that has the responsibility and authority for ensuring the success of the U.S. Tsunami Program. The oversight committee would be most useful if its members represented a broad spectrum of the community concerned with tsunami detection and forecasting. In contrast to inundation models used for evacuation planning in advance of an event (see Chapter 2), near-real-time forecast models produce predictions after a seismic event has been detected but before tsunamis arrive at the coast, which is the ultimate goal of the monitoring.
These forecast models make available to emergency managers in near-real time the time of first impact as well as the sizes and duration of the tsunami waves, and give an estimate of the area of inundation, similar to hurricane forecasting. The entire forecasting process has to be completed very quickly. For example, Hawaii Civil Defense needs about 3 hours to safely evacuate the entire coastline.
Because most far-field tsunamis generated in the North Pacific take less than 7 hours to strike Hawaii, the entire forecast, including data acquisition, data assimilation, and inundation projections, must take place within 4 hours or less. Although this sounds like a comfortable margin, it is in fact quite a short time compared to many other natural disasters, especially since it can take anywhere from 30 minutes to 3 hours to acquire sufficient sea level data (Whitmore et al.).
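The arithmetic of this timing budget is simple but unforgiving. Using the figures quoted above (with the data-acquisition value taken as the worst case of the stated range):

```python
# Far-field forecast timing budget for Hawaii, in hours, using the
# figures quoted in the text above.
travel_time = 7.0       # shortest North Pacific tsunami travel time to Hawaii
evacuation_time = 3.0   # time Hawaii Civil Defense needs to evacuate the coast
data_acquisition = 3.0  # worst case of the 0.5-3 hour range quoted above

forecast_budget = travel_time - evacuation_time
print(forecast_budget)                     # 4.0 hours for the entire forecast
print(forecast_budget - data_acquisition)  # 1.0 hour left for modeling, worst case
```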
For hurricanes, forecasts are made days in advance of landfall and evolve spatially at scales many times slower than a tsunami. The time window for a forecast of a near-field tsunami event is even smaller, because the first waves may arrive in less than 30 minutes (see the section on Instrumental Detection of Near-Field Tsunamis below).
The importance of forecasting the duration of wave arrivals, and the sizes of each arrival, is well known; for example, the largest and most destructive wave of the tsunami originating off the Kuril Islands on November 15, 2006, was the sixth wave to hit Crescent City, California. This wave hit more than two hours after the first wave arrived (Uslu et al.).
Although time-of-arrival information has been available for decades (Ambraseys), near-real-time forecasting of wave heights is a much more recent development. The use of near-real-time forecasting models is only possible because of data from the coastal and open-ocean sea level networks; modeling tsunamis based on seismic data alone is currently not very accurate, as noted in the section on Detection of Earthquakes above. The importance of accurate forecasts of maximum wave height was illustrated quite clearly in the wake of the Chilean earthquake of February 27, 2010. These systems employ pre-computed, archived event scenarios in conjunction with near-real-time sea level observations.
The PMEL system takes the forecast a step further by providing inundation distances and run-up heights that enable even more targeted evacuations. These forecast models allow the TWCs to make more accurate tsunami wave predictions than were possible without them, enabling more timely and more spatially refined watches and warnings. The PTWC was able to forecast reasonably well the observed tsunami heights in Hawaii more than five hours in advance of the Chilean tsunami's arrival (Appendix J).
Japanese scientists have led tsunami forecast modeling, have had forecast models in operation for some time (including for near-field events), and are able to draw on a very sophisticated, densely instrumented observation network. They are also very active in developing new methods for real-time forecasting. In brief, once an earthquake is detected, the SIFT model identifies an interim wave field from its database based on the seismically inferred source parameters and epicentral location.
As the tsunami arrives at sea level stations along its propagation path, tsunami amplitude data are used to improve the forecast by scaling the pre-computed free-surface distribution. Finally, the resultant scaled surface is used to initialize a boundary value problem and determine, at high resolution, the wave field, including inundation at the locations of interest. The three steps, in more detail, are as follows: The seafloor displacement is computed by linear-elastic dislocation theory and is applied for each unit earthquake, each representing a magnitude 7.5 event.
In addition, the database was developed for thrust events only and is now being updated for other types of earthquakes, particularly for the Caribbean region. By linearly combining the wave fields from adjacent unit sources, the most plausible and realistic tsunami scenarios are roughly inferred from the earthquake parameters; a larger event, for example, is represented by combining several unit sources. Because the unit sources are arranged in a pair of parallel rows, larger events with widths on the order of 100 km can also be represented.
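The linear-combination step can be sketched directly: because deep-water propagation is treated as linear, a scenario wave field is just a weighted sum of pre-computed unit-source fields. The tiny grids and weights below are hypothetical stand-ins for the real database entries.

```python
# Weighted sum of pre-computed unit-source wave-height grids (valid because
# deep-water propagation is treated as linear). Grids/weights are hypothetical.

unit_a = [[0.0, 0.1, 0.0],
          [0.1, 0.3, 0.1],
          [0.0, 0.1, 0.0]]
unit_b = [[0.0, 0.0, 0.1],   # the adjacent unit source, shifted one
          [0.1, 0.1, 0.3],   # grid cell along the fault
          [0.0, 0.0, 0.1]]

def combine(weights, fields):
    """Linear combination of wave-height grids of identical shape."""
    rows, cols = len(fields[0]), len(fields[0][0])
    return [[sum(w * f[i][j] for w, f in zip(weights, fields))
             for j in range(cols)] for i in range(rows)]

scenario = combine([2.0, 1.5], [unit_a, unit_b])
print(round(scenario[1][1], 2))  # 0.75 = 2.0*0.3 + 1.5*0.1
```

In the real system the fields are full basin-scale propagation solutions, but the composition rule is exactly this weighted sum.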
Each archive includes data on the spatial distribution of wave heights and fluid velocities; this information is needed to initialize the boundary conditions, which are then used to calculate in near-real time the inundation in specific locales.
Data assimilation from DART station data is performed: In this step, near-real-time measurements of the tsunami are used to scale the combined wave field constructed from the database. Once the tsunami is recorded by the DART sensor, the pre-computed wave time series (wave heights and arrival times) are compared to and scaled against the observed wave time series by minimizing a least-squares misfit.
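For a single scale factor, this least-squares fit has a closed form: the factor alpha minimizing the sum of (obs - alpha*pred)^2 is sum(obs*pred)/sum(pred^2). A sketch with hypothetical time series:

```python
# Least-squares scaling of a pre-computed tsunami time series to a DART
# record. Series values (cm) are hypothetical.

def lstsq_scale(pred, obs):
    """Scale factor alpha minimizing sum((obs - alpha*pred)**2)."""
    return sum(o * p for o, p in zip(obs, pred)) / sum(p * p for p in pred)

precomputed = [0.0, 1.0, 2.0, 1.0, 0.0]   # model wave heights
observed    = [0.0, 1.6, 3.1, 1.5, 0.1]   # DART-derived wave heights

alpha = lstsq_scale(precomputed, observed)
print(round(alpha, 2))  # 1.55 — the forecast field is scaled up accordingly
```

Each additional DART observation lengthens the observed series and refines alpha, which is why the forecast improves as the wave passes successive buoys.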
This scaling process can achieve results as soon as the full wavelength of the leading wave is observed, and it is updated with observations of the full wave time series. When the wave arrives at the next buoy, the tsunami wave heights are corrected again, although experience to date with such corrections is limited. Inundation estimates using the nonlinear Method of Splitting Tsunami (MOST) model are developed: Once the combinations of wave fields from the pre-computed scenarios are constrained by the DART sea level data using the least-squares fit technique, the database is queried for wave height and fluid velocity time series at all sea-boundaries of the region targeted for the inundation forecast.
At each boundary point, the time histories of heights and velocities are used to initialize the boundary conditions. The inundation computation proceeds using the nonlinear MOST model, which includes shoaling computations of the wave inundating dry topography, until inundation estimates are obtained. The process is built on the Synolakis theory of a solitary wave propagating over constant depth and then evolving over a sloping beach.
The wave field of approaching waves in deep water is assumed to be linear, so there are reasonable interim estimates for the entire flow, including reflection from the beach; i.e., once there is a linear solution in deep water (where depths are more than 20 m), this input can be used to solve the nonlinear evolution problem on a sloping beach (Carrier and Greenspan). One of the validation stations is at an open-ocean island (Midway Island, at the northwestern end of the Hawaiian Archipelago); the other is on the North American coast (Santa Barbara, California).
In the open ocean, SIFT-predicted amplitudes (although not the phases) agree fairly well with the observations. To date the two technologies have successfully forecast 16 tsunamis with an accuracy of about 80 percent when compared with tide gauge data (Titov et al.). Although these models forecast wave height reasonably well, forecasting the inundation remains a challenge. As with the ensemble model approach for hurricane forecasts, the committee considers it beneficial to run and compare multiple model outputs.
However, a process is needed that assists watchstanders in reconciling the differences and arriving at a single forecast output to be transmitted in the warning products. Such a process is well established in the National Hurricane Center (NHC), or more generally the weather service, where ensemble modeling is commonplace.
[Figure caption: The forecast models were run in near-real time before the tsunami reached the locations shown. The model data for Santa Barbara exhibited a 9-minute early arrival.]
Conclusion: Metrics are needed to objectively measure each model's performance. In addition, a process is needed by which multiple model outputs can be used to develop a single solution. The utility of the methodologies could be improved by ensuring that TWC staffs undergo a continuous education and training program as the forecast products are introduced, upgraded, and enhanced.
Near-field tsunamis present a daunting challenge for emergency managers. Even if the near-shore populace is well informed about the potential for a tsunami when the ground shakes, and even if local managers receive information from forecasters of an impending wave, there may be little time to act. Nevertheless, successful evacuations have occurred during the recent events in Samoa and Chile.