At a recent international workshop on urban pluvial flood modelling for the Rain Gain project, Alex Grist of Richard Allitt Associates presented a paper sharing good practices for enhancing the calibration of urban drainage models, including how social media can help improve data quality.
The RainGain project seeks to obtain detailed rainfall data at an urban scale, to use this data to analyse and predict urban flooding and to implement the use of rainfall and flood data in urban water management practice to make cities more resilient to local rainfall-induced floods.
The paper presented included work on improving rainfall estimates through the merging of radar and rain gauge data, mapping the quality of the asset data available for calibration, as well as highlighting the idea of using photographic and video records from social media for verification of surface water flooding.
Essentially we build hydraulic models in order to:
- Simulate drainage networks
- Look at fluvial runoff
- Evaluate surface water
- Assess impacts on sewerage networks
- Observe integrated systems
- Assess flood risk
- Test scenarios
- Calculate impacts
- Create scheme designs
Modelling outputs are then used for:
- Evaluating strategic financial investment
- Investment in upgrading existing assets
- Investment in new flood defences – output contributes to the evidence base to build the case for flood relief schemes
- Implementing flood warning systems – output is used to calculate the benefit provided by schemes
- Providing levels and volumes to inform detailed design
The commercial importance of these outputs means that there needs to be a high level of confidence in the model.
Ensuring the quality of input data is therefore essential to building a model in which high confidence can be placed. It goes without saying that where the model can be tested against accurate recorded data, greater confidence can be achieved; equally, the higher the confidence in the input data, the greater the confidence that can be placed in the model outcomes.
There are a number of ways in which the current situation could be improved upon including:
- Merging rain gauge and radar data to capture both the accuracy of rainfall measurement and spatial variation of the storm event.
- Using the Nash-Sutcliffe Efficiency Coefficient – a statistic comparing predicted with observed data, where a value of 1 indicates a perfect fit and values at or below zero indicate the model predicts no better than the mean of the observations.
- Looking at the principle of model confidence, including both the underlying asset data and verification achieved.
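The Nash-Sutcliffe comparison mentioned above is straightforward to compute. The sketch below shows the standard formula, NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))²; the flow values are illustrative only and are not taken from the paper.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean_obs)^2).

    1.0 is a perfect fit; values at or below 0 mean the model is no
    better a predictor than the mean of the observations.
    """
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

# Illustrative observed vs modelled flows (m^3/s) at a gauge
observed = [1.2, 2.5, 4.8, 3.1, 1.9]
simulated = [1.0, 2.7, 4.5, 3.4, 2.0]
print(round(nash_sutcliffe(observed, simulated), 3))  # close to 1, a good fit
```

Because the statistic is normalised against the variance of the observations, it gives a single comparable score across gauges and events, which is what makes it useful for quantifying verification rather than judging hydrographs by eye.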
Measures such as these build on existing guidance, but with respect to overland flow there is no definitive guidance on acceptable levels of verification. In particular, the data collected for storm verification has very often been anecdotal, relying on the public for information.
This brings issues with it, as such information cannot always be relied upon for accuracy. Notably, though, the quality of information is changing and can now be considerably enhanced through social media.
Platforms such as Twitter now carry a wealth of information regarding an ongoing or past flood event. Most people have a camera in their pocket and can instantly upload images, and crucially this information (through platforms such as Twitter) is time-stamped. This makes it easier to establish at what point in the flood timeline an image was captured.
By stitching this information together and identifying street furniture and other objects with known dimensions a more realistic picture of the flooding event (depths and times) can be built.
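As a rough illustration of this stitching step, the sketch below orders hypothetical timestamped depth estimates into a flood timeline. The times, depths, and reference objects are invented for the example; depths are assumed to have been judged against street furniture of approximately known size (a UK kerb is roughly 0.1–0.125 m high, for instance).

```python
from datetime import datetime

# Hypothetical observations inferred from timestamped social-media photos:
# (upload time, estimated water depth in metres, reference object used).
observations = [
    ("2015-07-14 16:42", 0.10, "water just over the kerb"),
    ("2015-07-14 16:05", 0.02, "sheet flow across the carriageway"),
    ("2015-07-14 17:30", 0.45, "up to a car's door sill"),
    ("2015-07-14 18:55", 0.20, "kerb no longer visible"),
]

# Sort into a flood timeline and locate the peak depth.
timeline = sorted(
    (datetime.strptime(t, "%Y-%m-%d %H:%M"), d, note)
    for t, d, note in observations
)
peak_time, peak_depth, _ = max(timeline, key=lambda row: row[1])

for when, depth, note in timeline:
    print(when.strftime("%H:%M"), f"{depth:.2f} m", "-", note)
print("peak:", peak_time.strftime("%H:%M"), f"{peak_depth:.2f} m")
```

Even a handful of such points gives an approximate depth hydrograph for a street, which can then be compared against the modelled surface water levels at the same location and times.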
Social media can be a rich source of information and should form part of a formalised code of practice for collating flood data in 'real time' for use in verification. Of course, data should come from a variety of other sources too, but recognising this growing source of data and formalising its use in the process is an important step forward.
Money spent to improve verification is always an investment in the security of schemes and services; the additional benefit of data gathered from social media, however, is that it is freely available.