Punta Telegrafo - Forecast improvements with a weather station

Learn why even the models with the highest possible resolution have weaknesses and how we overcome them using the example of Punta Telegrafo in Italy.

In some regions, weather forecasting is easier than in others. Forecasts for mountainous regions such as the Alps, for example, are often more difficult, mainly because the relief strongly influences local atmospheric conditions. The weather in a valley south of a mountain can look quite different from that in the valley north of the same mountain, due among other things to differences in radiation, wind and turbulence.
Numerical weather prediction models often struggle to capture small-scale and rapidly changing weather phenomena because of their limited resolution: the topography can vary drastically even within a single grid cell of only a few kilometres. As a consequence, temperatures at the top of a mountain tend to be overestimated, especially in warmer regions, because the same grid cell also contains lower-altitude locations. To overcome these problems, meteoblue uses the so-called meteoblue Learning MultiModel (mLM). The mLM is a post-processing method in which the output of the raw models is combined with current local observational data: it is fed with measurements from local weather stations as well as satellite and radar observations, searches for the best model combination based on these data, and further refines the result by directly applying nowcasting data and a topography model.
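To make the idea more concrete, below is a minimal sketch of how several raw-model forecasts could be blended using weights derived from each model's recent error against local station measurements. The function name, the inverse-error weighting scheme and the numbers are illustrative assumptions, not meteoblue's actual mLM implementation.

```python
# Hypothetical sketch: combining several raw-model forecasts into one value
# per site, weighting each model by how well it recently matched the local
# station. Illustrative only, not the actual mLM algorithm.
def combine_forecasts(model_forecasts, model_errors):
    """model_forecasts: dict model name -> forecast temperature (deg C)
    model_errors: dict model name -> mean absolute error (deg C)
    computed against station observations over a recent training window."""
    # Weight each model by the inverse of its recent error, so models that
    # matched the local station best contribute the most.
    weights = {m: 1.0 / max(err, 0.1) for m, err in model_errors.items()}
    total = sum(weights.values())
    return sum(model_forecasts[m] * w for m, w in weights.items()) / total

# Example: three raw models for a mountain-top grid cell
forecasts = {"model_a": 14.2, "model_b": 16.8, "model_c": 15.1}
recent_mae = {"model_a": 1.1, "model_b": 4.3, "model_c": 2.0}  # vs. station data
print(round(combine_forecasts(forecasts, recent_mae), 1))
```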

In the following, we show a concrete example of how the installation of a single weather station on a mountain improved the weather forecast:

One of our weather-enthusiast users installed a weather station on Punta Telegrafo, a mountain located just north of Verona on the eastern side of Lake Garda (Italy), suspecting that our temperature prediction for this location was consistently too high and hoping that local measurements could reduce the prediction error. Once the measurement data are fed into our mLM, the model first needs about two weeks to be "trained" with them; during this first step, we used the measurements only to validate the previous prediction. The largest improvement in the forecast comes after this adjustment period. Our short-term verification (third image) shows exactly the point at which the major effect becomes apparent: since July 7th, 2023, the forecast error for this location has decreased significantly, whereas before we had seen forecast errors of up to 6 °C.
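As an illustration of what such a short-term verification measures, the sketch below compares a few hypothetical hourly forecast values with station measurements and reports the bias and mean absolute error. The numbers are made up; they only mimic the kind of systematic over-estimation described above.

```python
# Illustrative sketch of a short-term verification: comparing hourly
# forecasts against station measurements. Values are invented examples.
import numpy as np

forecast = np.array([12.0, 13.5, 15.0, 16.2, 15.8, 14.1])   # forecast, deg C
observed = np.array([7.0,  8.2,  9.5, 10.8, 10.4,  8.9])    # station, deg C

errors = forecast - observed
print(f"bias: {errors.mean():+.1f} degC")        # systematic over-estimation
print(f"MAE:  {np.abs(errors).mean():.1f} degC")
print(f"max error: {np.abs(errors).max():.1f} degC")
```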

After this first large improvement, model training continues whenever the mLM receives new measurement data, leading to continuous further improvements. The second plot shows the short-term verification from a few days later, where it is visible that the error has decreased even further.
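A minimal sketch of what such continuous training could look like is given below: each new station observation nudges a running error estimate, so the weighting of the models keeps adapting over time. The exponential-moving-average update and its parameters are assumptions made for illustration, not the actual mLM update rule.

```python
# Hypothetical continuous-training sketch: update a running per-model error
# estimate each time a new station measurement arrives.
ALPHA = 0.05  # how strongly a single new observation counts

def update_error(previous_mae, forecast, observation, alpha=ALPHA):
    """Return an updated running MAE estimate after one new observation."""
    return (1 - alpha) * previous_mae + alpha * abs(forecast - observation)

mae = 4.0  # running error estimate before the station was used (deg C)
for fcst, obs in [(15.2, 10.1), (13.8, 9.7), (11.9, 9.0)]:
    mae = update_error(mae, fcst, obs)
    print(f"updated running MAE: {mae:.2f} degC")
```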

This example shows how valuable observational data are and that our prediction process is able to learn from these measurements. However, the measurement data must be of good quality so that the model learns from the right data. This is ensured through multiple quality-control barriers that filter out wrong measurements, so the forecast remains accurate even if a weather station delivers faulty data. Since faulty data are not an edge case, and measurements often have gaps or are incomplete or wrong, this quality control is essential. Moreover, we also ensure that the forecast quality remains high where no weather stations are available, by applying other technologies.
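As an illustration of what such quality-control barriers can look like, the sketch below applies a few common checks (physical range, spike, stuck sensor) to incoming temperature readings. The thresholds and the function name are hypothetical and are not meteoblue's actual quality-control rules.

```python
# Hedged sketch of simple quality-control checks for incoming station data.
def passes_quality_control(temp_c, previous_temps):
    # 1. Range check: reject physically implausible temperatures.
    if not -60.0 <= temp_c <= 50.0:
        return False
    if previous_temps:
        # 2. Spike check: reject jumps that are too large between readings.
        if abs(temp_c - previous_temps[-1]) > 10.0:
            return False
        # 3. Persistence check: a sensor reporting the exact same value
        #    for many consecutive readings is probably stuck.
        if len(previous_temps) >= 6 and all(t == temp_c for t in previous_temps[-6:]):
            return False
    return True

history = [8.4, 8.1, 7.9]
print(passes_quality_control(8.2, history))    # True: plausible reading
print(passes_quality_control(25.0, history))   # False: unrealistic jump
```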