A global approach to predicting flash floods

Tim Hewson and Florian Pappenberger

ECMWF scientists Tim Hewson and Florian Pappenberger believe it would be possible to predict flash floods across the globe at a fraction of the cost associated with current systems.

Their proposal to extract the required information from readily available global weather forecasts earned them a place as finalists for this year's Harry Otten Prize for Innovation in Meteorology.

Tim and Florian, who are both Principal Scientists in ECMWF's Forecast Department, were awarded a runner-up prize of 2,500 euros at a ceremony in Sofia on 8 September. As they explain below, they are hopeful that their idea can be turned into reality.

Why is it important to be able to predict flash floods?

For some countries flash floods are the most deadly of all natural disasters, so if we were able to predict them accurately many lives could be saved. This could be brought about through evacuation and 'transport curfews'. Other mitigating actions could also be taken to protect property and infrastructure. It's difficult to say how frequent flash floods are, though here at ECMWF we have been putting together a flash flood database, which suggests that over Europe there are hundreds of events every year.

What are the limitations of current flash flood prediction systems?

Predicting flash floods is arguably less involved than predicting broader-scale flooding on large rivers, because one does not necessarily need to use detailed hydrological models. Studies suggest that being able to predict extreme rainfall over a small area would be sufficient. However, it has proven extremely difficult to do even that.

Using high-resolution weather forecasting models is one way to potentially address this problem. By 'high resolution' we mean here that the grid spacing these models work with is of the order of 2 km. The target has been to represent explicitly, within those models, the storm clouds that are responsible, and all the complex physics going on within those clouds. Such models can predict when extreme rainfall is more likely, but in general they will not put the extreme rainfall events in the right locations. Typical positioning errors are of the order of 40 km.

Running and developing high-resolution models is also a very costly business since it requires top-of-the-range supercomputing facilities and considerable human resources to develop the programs that run on these computers. Furthermore, their usefulness depends on the availability of high-density observations, and it increases if they are run in ensemble mode, which implies even greater costs. The high costs mean that this approach is viable only for a small fraction of the world.


Flash floods can cause fatalities as well as substantial damage. (Photo: Thinkstock/iStock/fotonazario)

How does your proposal address these issues?

Our proposal involves turning conventional wisdom on its head. As outlined above, the conventional approach has been to use high-resolution models; in our proposal we instead use lower-resolution global models, which provide output worldwide, and we apply statistical techniques to those models to obtain a measure of how likely flash flooding is at particular locations in different weather scenarios.

To give a simple but highly relevant example: if the global models are forecasting heavy showers (which they are quite good at) and if the atmospheric winds at the level of the showers are light (this information is also accurately portrayed by these models) then those showers won't be moving very quickly. This means that at certain locations they can last a relatively long time (say 1 to 6 hours) and extreme rainfall will result at those locations. In reality the principles we apply are a bit more complicated than in this simple example, but not by much.

In essence in our approach we are trying to estimate how variable the rainfall totals will be across a region, on scales of a few kilometres. If the rainfall is very variable, as in showery, light-wind situations, but the average rainfall amount in a region (which again the global models are quite good at) is large, then we know that some locations will see extreme totals, even though we cannot pinpoint exactly where the extreme rainfall will be.
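The reasoning above can be sketched in code. The snippet below is an illustrative toy, not the authors' method: it assumes point rainfall totals within a region follow a lognormal distribution whose mean is the regional average from the global model, and whose coefficient of variation reflects how variable the rainfall is on small scales. The same regional average then implies very different chances of a locally extreme total depending on that variability.

```python
import math

def point_exceedance_prob(regional_mean_mm, cv, threshold_mm):
    """P(point total > threshold) under a lognormal point-rainfall model.

    regional_mean_mm : area-average rainfall from the global model
    cv               : coefficient of variation of point totals within the
                       region (large in showery, light-wind situations)
    threshold_mm     : local total regarded as 'extreme'

    The lognormal assumption is purely illustrative.
    """
    # Lognormal parameters matching the given mean and CV
    sigma2 = math.log(1.0 + cv ** 2)
    mu = math.log(regional_mean_mm) - 0.5 * sigma2
    # Survival function: P(X > threshold)
    z = (math.log(threshold_mm) - mu) / math.sqrt(sigma2)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Same 20 mm regional average, two hypothetical regimes:
steady = point_exceedance_prob(20.0, cv=0.3, threshold_mm=60.0)   # uniform frontal rain
showery = point_exceedance_prob(20.0, cv=1.5, threshold_mm=60.0)  # slow-moving showers
```

With these made-up numbers the showery, high-variability regime gives a far higher chance that some location exceeds 60 mm, even though the regional average is identical and we cannot say where the extreme total will fall.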

Another benefit of our approach is that we could provide forecasts for lead times up to 10 days and beyond, whereas high-resolution models are generally not run beyond a few days.

In summary, our approach is a useful complement to high-resolution models, extending the prediction of flash floods to the medium range and over the whole globe.

How do you extract information on local variability from the global model?

The local variability is estimated using simple parameters taken from the global model output, such as wind speeds. This is straightforward to do, using data that is regularly archived. Other parameters that we have looked into include whether or not the model deems the rainfall to be showery in nature, how complicated the underlying topography is, how unstable the atmosphere is (which is a measure of how energetic the storms can potentially become) and how much the wind varies with height.
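As a rough illustration of how such ingredients might be combined, the sketch below maps each of the named parameters onto [0, 1] and averages them into a single variability score. All scalings, weights, and even the sign of each term's effect are assumptions made here for illustration; the authors' actual statistical relationships are not published in this article.

```python
def variability_index(wind_speed_ms, convective_fraction,
                      orography_std_m, cape_jkg, shear_ms):
    """Toy sub-grid rainfall-variability score in [0, 1].

    Every scaling constant and the equal weighting are illustrative,
    not calibrated values. Slow-moving, showery, unstable situations
    over complex terrain score highest.
    """
    slow = max(0.0, 1.0 - wind_speed_ms / 20.0)   # light steering winds -> showers linger
    showery = convective_fraction                  # model's convective share of rainfall
    terrain = min(1.0, orography_std_m / 500.0)    # sub-grid orographic complexity
    unstable = min(1.0, cape_jkg / 2000.0)         # potential storm energy (instability)
    shear = min(1.0, shear_ms / 20.0)              # vertical wind variation; sign of its
                                                   # effect is an assumption here
    return (slow + showery + terrain + unstable + shear) / 5.0

# Hypothetical regimes: calm showery day over hills vs. windy stratiform rain
high = variability_index(3.0, 0.9, 400.0, 1800.0, 5.0)
low = variability_index(18.0, 0.1, 50.0, 200.0, 5.0)
```

In an operational version, relationships like these would be fitted statistically against observed small-scale rainfall variability rather than hand-tuned.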

We have looked at a number of cases, such as the Tbilisi flash flood in June this year, which made the news when it killed over 20 people and released zoo animals onto the streets. In these cases there are very clear signs that these ingredients are important. And in a much more extensive study we applied standard statistical tests and found that the relevance of the parameters that we tested was way beyond question (significance levels exceeded 99.9999%!).

How far is your idea from being implemented?

We estimate that to get a system up and running, to provide probabilistic flash flood forecasts for the whole world using the highly respected ECMWF global ensemble system, would take about six person-months. And with a further six months of work a more refined system could be created.

The development work would first involve refining the relationships that we have already identified between global model parameters and local rainfall variability. Then in stage two we would investigate whether or not other parameters could be used. Stage three could involve some cross-checking against the ECMWF flash flood database, and then in stage four code would be generated according to previous findings, and made operational.

There are strong signs now that this development will happen. Resourcing has been something of an issue, though we do now have clear interest from a funding agency.