Why weather belongs in your contact centre forecast
The two-sided weather effect
Most contact centre forecasts treat weather as background noise — something that happens to the operation rather than an input the model takes seriously. That is a defensible position for some queues and a serious oversight for others. The cost of ignoring weather is usually felt twice in the same week: customer demand spikes because of the weather, agent supply falls because of the same weather, and the operation discovers that the forecast it built around “average” conditions has missed both sides simultaneously. This article makes the case for taking weather seriously, sets out where to get the data, shows how to build it into a forecast, and maps the maturity curve from “we ignore it” to “we model it directly.”
The demand side
Weather changes customer behaviour, and the size of the effect depends on the operation. A motor insurance line sees claim calls rise sharply on snowy or icy days. A home insurance line sees a similar effect from storms, floods, and burst-pipe weather. A utility customer-service line sees calls rise when extreme temperatures push consumption up, and again when outages happen. A retail or delivery line sees the opposite — bad weather suppresses outbound consumer activity in ways that ripple into customer-service queues with a lag of hours or days. A travel or transport operation sees weather effects that are first direct (delays, cancellations, rebookings) and then second-order (compensation queries, complaints).
The size of these effects is often substantial. Insurance claim volumes can double on a heavy-snow day. Utility queries can rise by 30–50% during a heatwave-driven outage week. Travel disruption queries are routinely 5–10x normal volume on the worst weather days. A forecast built around the seasonal average without an explicit weather adjustment is, on those days, materially wrong — and the wrongness compounds because the operation has neither the agents nor the schedule to absorb it.
The supply side
Weather hits agent supply at the same time it hits demand, and almost always in the wrong direction. Snow stops agents getting to physical sites; even hybrid operations see absence rise on bad weather days. School closures (sometimes triggered by the same weather) drag in absence among parents who have no childcare backup. Storms cause power and broadband outages that knock home-working agents offline. Heatwaves stress older buildings without good cooling. Public-transport disruption affects agents who commute and the operation that depends on them.
The supply side is the one workforce planners most consistently underestimate. The Friday-afternoon forecast for next week assumes a normal absence rate; the actual absence on Monday morning is double that, because the snow forecast the planner did not look at has already played out. The pattern that hurts most is the supply shortfall arriving in the same window as the demand surge. A queue that would have managed a 20% volume spike with a full team finds itself with 80% of the team and 120% of the demand at the same time.
What weather data to use
The good news is that high-quality weather data is freely available and getting better. In the UK, the Met Office publishes daily historical observations at multiple sites going back decades, and a free five- to seven-day forecast that is usable for tactical planning. Commercial APIs from the Met Office, OpenWeatherMap, WeatherAPI, and Visual Crossing offer programmatic access at low cost, with historical archives and forecast feeds that integrate cleanly with most modern data stacks.
The variables that matter for forecasting depend on the operation, and a short correlation analysis over a couple of years of historical data usually identifies them within a day or two of work. Useful candidates include daily mean temperature (and especially deviation from the seasonal norm rather than the absolute value), total precipitation, snow depth or a snow flag, peak wind speed, and the presence of an official severe-weather warning. Severe-weather warnings are particularly useful because they capture the anticipated impact rather than the meteorological measure — a Met Office amber warning for snow has a measurable effect on customer behaviour even if the actual snowfall is light, because customers are responding to the warning as much as the weather.
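As a sketch of that correlation analysis: the snippet below joins daily volumes with weather observations, adds a temperature-anomaly feature, and ranks candidate drivers by correlation with volume. The column names and the synthetic data are illustrative assumptions, not a prescribed schema.

```python
import numpy as np
import pandas as pd

def weather_correlations(df: pd.DataFrame, target: str = "volume") -> pd.Series:
    """Rank candidate weather drivers by correlation with daily volume.

    Expects one row per day with a datetime 'date' column; the weather
    column names here are assumptions for illustration.
    """
    out = df.copy()
    # Deviation from the seasonal norm often matters more than the raw value.
    norm = out.groupby(out["date"].dt.dayofyear)["mean_temp"].transform("mean")
    out["temp_anomaly"] = out["mean_temp"] - norm
    candidates = ["mean_temp", "temp_anomaly", "precip_mm", "snow_flag"]
    return out[candidates + [target]].corr()[target].drop(target).sort_values()

# Synthetic two-year example: volume rises sharply on snow days.
rng = np.random.default_rng(0)
dates = pd.date_range("2023-01-01", periods=730, freq="D")
temp = 10 + 8 * np.sin(2 * np.pi * dates.dayofyear / 365) + rng.normal(0, 2, 730)
snow = ((temp < 2) & (rng.random(730) < 0.5)).astype(int)
df = pd.DataFrame({
    "date": dates,
    "mean_temp": temp,
    "precip_mm": rng.gamma(1.5, 2.0, 730),
    "snow_flag": snow,
    "volume": 1000 + 600 * snow + rng.normal(0, 50, 730),
})
print(weather_correlations(df))
```

On real data the interesting output is not the single strongest driver but the shortlist of variables worth carrying into the forecast, and the same ranking run per queue, since a motor-claims line and a retail line will surface different drivers.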
Building weather into a forecast
The journey from “ignoring weather” to “modelling it directly” is gradual, and the early steps are inexpensive. The cheapest first move is a manual weekly overlay. Each Friday afternoon the planner reviews the seven-day forecast, identifies any days that match a known sensitive condition (snow, storm, heatwave), and applies a manual uplift or downlift to the affected day’s volume and absence forecasts. That single habit closes the largest part of the weather gap for most operations, with no infrastructure investment beyond a five-minute meeting and a written log of what was applied and why.
The next step is a structured lookup table — for example: “Amber snow warning: +30% on motor claims, −10% on retail enquiries, +50% on day-of agent absence assumption.” Tables are crude but transparent, easy to maintain, easy to audit, and easy to defend in a finance conversation. Most planning teams that adopt them refine the numbers over a year of observations until the table reflects the operation’s actual sensitivity rather than the planner’s intuition.
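A lookup table like that can live in a spreadsheet or a few lines of code. In this sketch the conditions, queue names, and percentages are illustrative placeholders to be replaced with the operation's observed sensitivities, not recommended values:

```python
# Illustrative adjustment table: condition -> per-queue uplift/downlift.
# Every figure here is an assumption to be refined against observed impacts.
WEATHER_ADJUSTMENTS = {
    "amber_snow_warning": {"motor_claims": +0.30, "retail_enquiries": -0.10, "absence": +0.50},
    "storm_warning":      {"home_claims":  +0.40, "absence": +0.20},
    "heatwave":           {"utility_queries": +0.35, "absence": +0.10},
}

def apply_overlay(baseline: dict, condition: str) -> dict:
    """Apply a condition's adjustments to a baseline forecast; unknown
    conditions leave the baseline untouched."""
    adj = WEATHER_ADJUSTMENTS.get(condition, {})
    return {queue: round(value * (1 + adj.get(queue, 0.0)))
            for queue, value in baseline.items()}

baseline = {"motor_claims": 1200, "retail_enquiries": 800, "absence": 40}
print(apply_overlay(baseline, "amber_snow_warning"))
# {'motor_claims': 1560, 'retail_enquiries': 720, 'absence': 60}
```

Keeping the table in one declarative structure is what makes it auditable: the finance conversation is about the numbers in the table, not about code.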
Beyond that, weather variables can be incorporated into a regression model alongside the existing time-series baseline. A gradient-boosted model with weather features sitting on top of a Holt-Winters baseline typically outperforms either alone, and modern explainability tooling (SHAP values, feature-importance reports) makes the model defensible to stakeholders who want to know why the forecast moved.
The most mature step is real-time integration. A weather feed updating every few hours, an automatic alert when conditions change materially, and a re-forecast triggered by significant deviations from the morning’s plan turn weather from a planning input into an operational signal. Few operations need this level of integration, but those running large weather-sensitive queues — insurance, utilities, transport, energy — usually find it pays for itself within a single peak event.
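The triggering logic at the heart of that loop can be very small. This sketch assumes a simple percentage-deviation rule; the 15% threshold is an illustrative tunable, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ReforecastTrigger:
    """Fire a re-forecast when intraday actuals drift from plan.

    The threshold is an assumption; tune it to the queue's normal
    volatility so the trigger fires on weather events, not noise.
    """
    threshold: float = 0.15  # 15% deviation from plan

    def should_reforecast(self, planned: float, actual: float) -> bool:
        if planned <= 0:
            return actual > 0
        return abs(actual - planned) / planned > self.threshold

trigger = ReforecastTrigger()
print(trigger.should_reforecast(planned=1000, actual=1320))  # True: +32% vs plan
print(trigger.should_reforecast(planned=1000, actual=1080))  # False: +8%, within band
```

In practice this check would run against interval-level actuals each time the weather feed refreshes, with the alert routed to whoever owns the intraday plan.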
Lead time, confidence, and the limits of weather forecasting
Weather forecasts have their own accuracy curve. A one- to three-day forecast is highly reliable. A five- to seven-day forecast is usable but degrades. A fourteen-day forecast is directional at best — useful for “is the next week likely warmer or colder than normal” but unreliable for specific-day planning. Beyond two weeks, climatology (historical norms for the time of year) is usually as good as any forecast. The planner who treats a fourteen-day weather forecast with the same confidence as a three-day one will introduce more error than they remove.
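One way to encode that accuracy curve is to damp any weather-driven adjustment by its lead time, so a fourteen-day signal is applied at a fraction of its face value. The weights below are illustrative assumptions shaped to the curve described above, not calibrated values:

```python
def damped_adjustment(raw_uplift: float, lead_days: int) -> float:
    """Shrink a weather-driven forecast uplift as lead time grows.

    Weights are illustrative: near-full confidence inside 3 days,
    fading to zero (climatology only) beyond 14 days.
    """
    if lead_days <= 3:
        weight = 1.0
    elif lead_days <= 7:
        weight = 0.6
    elif lead_days <= 14:
        weight = 0.25
    else:
        weight = 0.0
    return raw_uplift * weight

print(damped_adjustment(0.30, lead_days=2))   # 0.3: apply in full
print(damped_adjustment(0.30, lead_days=10))  # 0.075: directional only
```

The exact weights matter less than the shape: confidence should fall with lead time, and beyond two weeks the adjustment should vanish entirely.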
This means the right cadence for weather adjustments is short and rolling. A weekly review feeding into the next week’s plan, refreshed mid-week if conditions change materially, captures most of the value. A monthly weather adjustment based on a long-range forecast captures very little.
Common mistakes
Three patterns recur. The first is including weather only on the demand side and ignoring its effect on agent supply, which leads to forecasts that look better than reality at exactly the moment reality is hardest. The second is using absolute weather values instead of deviation from seasonal norms — a 22-degree day in July is unremarkable; in March it is unusual enough to shift behaviour. The third is treating weather as binary when it is continuous — snow is not on or off, the impact scales with depth, duration, and timing. Each of these is fixed cheaply once recognised; the problem is that operations rarely realise they are making them until a bad weather event exposes the gap.
The maturity curve
Most contact centre operations sit at one of five levels. At level one, weather is not in the forecast at all. At level two, the planner manually overlays adjustments for extreme days. At level three, weather review is a fixed weekly habit with a written log of impacts and accuracies. At level four, weather features are embedded in the forecasting model itself. At level five, real-time weather feeds drive intraday adjustments and severe-weather alerting. Most operations that take weather seriously sit at level three; the operations that genuinely need level four or five know who they are. The practical advice for everyone else is to move up one level at a time, measuring the accuracy improvement honestly along the way.
Conclusion
Weather is one of the few forecast inputs that is freely available, acts as a leading indicator, and is demonstrably valuable — and it is one of the most consistently under-used. The contact centres that take it seriously gain a meaningful and cheap advantage in forecast accuracy, agent wellbeing on bad weather days, and customer experience during the events that matter most. The first step is small — a five-minute Friday review, a written log, a willingness to treat weather as an input rather than an excuse — and it pays back faster than almost any other investment a planning team can make.
Pair this with the beginner’s guide to forecasting for the foundations a weather overlay sits on top of, or the guide to AI for forecasting for the technical path of including weather as a structured driver in the model.