Weather forecasting – models, data and (deliberate) biases

I have always been fascinated by weather forecasting. I remember myself as a kid (probably 9 or 10 years old) drawing clouds on a sheet of paper, measuring (crudely, and surely not accurately) temperatures and winds, trying to find patterns and make sense of all that beautiful mess up in the sky.

Little did I know about the laws of fluid dynamics, thermodynamics and chaos that drive the weather.

And I was way off with my forecasts. Embarrassingly off.

Spending most of my time in the UK, I have always wondered whether the Met Office has got any better over time.

Surely the laws of fluid dynamics and thermodynamics haven't changed, they are pretty much deterministic, and we now have so much processing power that we should be able to get it 100% right, correct?

Or maybe not.

It really is down to the chaotic nature of our atmosphere, and to the fact that those laws (or rather their equations) are non-linear. When something is non-linear, small variations in the inputs can have a big impact on the outputs. And not only are they non-linear, they are also differential equations, which makes things even worse: the inherent 'messiness' propagates and grows as you step forward in time.

So if you register 0.1 °C, 3 knots or 2 mbar more or less in a few observations, it could mean a storm instead of sunshine.

And when you are registering hundreds of data points over a period of time… you get the picture.

When you put it all together, the only thing you really know is that what you have registered is not 100% accurate, or at least not 100% representative, for a number of reasons. And when you feed that into a model built from lots of differential equations, good luck. Non-linear differential equations are the nastiest type.
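Here is a toy illustration of that sensitivity (a sketch, not a real weather model): the Lorenz-63 system, a famously chaotic simplification of atmospheric convection. Two runs that start almost identically end up in completely different places.

```python
# Toy illustration of sensitivity to initial conditions using the Lorenz-63
# system (a classic simplified convection model, NOT an actual weather model).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x0, y0, z0, steps=3000):
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two starting points that differ by one part in a million
# (think of it as the 0.1 °C you mis-measured).
a = run(1.0, 1.0, 1.0)
b = run(1.000001, 1.0, 1.0)

print("run A ends at:", a)
print("run B ends at:", b)  # ends up nowhere near run A
```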

So what do you do?

Well, you feed the model many scenarios, each one with tiny variations of the inputs, and collect all the outputs. Then you work out all the probable outcomes. You'll have a certain number of outcomes with sunshine, some with rain, some with a storm, and some with snow. And based on the frequency of outcomes, some statistics and further modelling, you work out your forecast. This is called ensemble forecasting. So when a weather forecast says sunny, it might really mean 50% sunny, 30% cloudy, 20% rainy. Let's call it sunny. But it might still rain. Although it's not that likely.
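A minimal sketch of the idea, with a made-up "model" and made-up thresholds (a real ensemble runs a full numerical weather model for every member): perturb the observations within their plausible error, run each member, and count how often each outcome appears.

```python
import random
from collections import Counter

def toy_model(temp_c, wind_kt, pressure_mb):
    """Pretend forecast model: maps perturbed observations to a weather type.
    The score and thresholds are invented purely for illustration."""
    score = (1015 - pressure_mb) * 0.05 + wind_kt * 0.02 - (temp_c - 15) * 0.01
    if score < 0.2:
        return "sunny"
    elif score < 0.5:
        return "cloudy"
    else:
        return "rainy"

# One set of observations, each uncertain by roughly the amounts mentioned above.
obs = {"temp_c": 15.0, "wind_kt": 8.0, "pressure_mb": 1011.0}

outcomes = Counter()
runs = 1000
for _ in range(runs):
    # Tiny random perturbations stand in for measurement/representativeness error.
    member = toy_model(
        obs["temp_c"] + random.gauss(0, 0.1),
        obs["wind_kt"] + random.gauss(0, 3),
        obs["pressure_mb"] + random.gauss(0, 2),
    )
    outcomes[member] += 1

for weather, count in outcomes.most_common():
    print(f"{weather}: {count / runs:.0%}")
```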

There is one final step before releasing the forecast: the forecaster's adjustment, or bias. Some forecasters over-state certain scenarios, especially the low-probability rain scenarios. For example, the model might give 5%, but they report 20%. And this happens consciously (it has been going on for more than 10 years, plenty of time to correct the bias!). Why? Some forecasters think that increasing the probability of rain when it is low increases the usefulness of their forecast (people will take an umbrella). At higher rain probabilities, there is no bias. It's called the 'wet bias'.
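As a sketch (the threshold and the 20% figure below are illustrative, not any broadcaster's actual policy), the adjustment is just a mapping from the ensemble probability to the reported one:

```python
def report_rain_probability(model_prob):
    """Apply an illustrative 'wet bias': inflate low rain probabilities,
    leave higher ones untouched. Numbers are made up for this sketch."""
    if model_prob < 0.10:
        return 0.20  # a 5% chance from the ensemble goes out as 20%
    return model_prob

for p in (0.05, 0.08, 0.30, 0.70):
    print(f"model: {p:.0%}  ->  reported: {report_rain_probability(p):.0%}")
```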

Most likely this doesn’t happen in the UK. Everybody always carries an umbrella here.
