Forecasting a new operation: when you have no history
The case the standard methods don’t cover
Every textbook forecasting method assumes the planner has at least a couple of years of clean historical data. Exponential smoothing needs a level and a trend; ARIMA needs autocorrelation structure; Prophet needs enough cycles to fit seasonality. None of them helps when the operation has no history at all — a new queue, a new product, a new contract, a new contact channel. Yet new operations are forecast all the time, and the forecasts they produce drive headcount decisions before any actuals exist to validate them. This article walks through how to forecast when you have nothing to extrapolate from, the data sources that fill the gap, the deliberate over-staffing pattern that protects the operation in the early weeks, and the recalibration discipline that turns the no-history forecast into a real one as actuals arrive.
The four sources of insight when you have no data
A new-operation forecast is assembled from four sources. Analogue queues — existing operations doing similar work for similar customers. The new queue may not have history, but the operation’s billing line probably does, and its arrival pattern is a reasonable starting point. Vendor or market benchmarks — if the operation is a contracted service, the BPO or solution provider has comparable contracts whose patterns they can share. Top-down business assumptions — the business case behind the new operation usually includes a customer count, a contact-rate-per-customer assumption, and a product-mix assumption. Multiply them through and you get a top-line volume. Process knowledge — the people designing the customer journey know which steps will route to contact and which won’t. A walk-through with product, operations, and design produces estimates of which call types will appear and roughly how often.
None of these sources is reliable on its own. Together, they triangulate a forecast that is good enough to plan against, with explicit uncertainty bands.
Building the first forecast
A workable first forecast follows a four-step build. The first step is the top-line annual volume, usually as a range rather than a point. If the business case predicts 50,000 customers and the analogue queue runs at 0.8 contacts per customer per year, the annual volume is around 40,000, but the range should be 30,000 to 55,000 to reflect the genuine uncertainty. The second step is the weekly profile, taken from the analogue queue or a benchmark with reasoning about why the new queue might differ. The third step is the intraday profile, similarly anchored. The fourth step is the build-up curve — the new operation rarely launches at full volume; volumes typically ramp over the first 8–16 weeks as customers and awareness accumulate.
The result is a forecast that is internally consistent, traceable to its inputs, and explicitly uncertain. It is not a single number; it is a range with a most-likely estimate.
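The four-step build above can be sketched in a few lines of Python. Every input here is illustrative — the customer count and contact rate echo the worked example, and the 12-week linear ramp is an assumption within the 8–16 week range mentioned:

```python
# Hypothetical first-forecast build for a new queue. All inputs are
# illustrative assumptions, not data from any real operation.

customers = 50_000
contact_rate = 0.8                        # contacts/customer/year, from the analogue queue
annual_mid = customers * contact_rate     # 40,000 most-likely annual volume
annual_low, annual_high = 30_000, 55_000  # judgement-based range, carried alongside

RAMP_WEEKS = 12  # assumed build-up period (within the typical 8-16 weeks)

def ramp_factor(week: int) -> float:
    """Linear ramp from partial volume in week 1 to 100% at RAMP_WEEKS."""
    return min(1.0, week / RAMP_WEEKS)

steady_weekly = annual_mid / 52  # flat weekly profile; replace with the
                                 # analogue queue's actual weekly shape

def weekly_forecast(week: int) -> float:
    return steady_weekly * ramp_factor(week)

print(round(weekly_forecast(1)), round(weekly_forecast(12)))
```

A real build would replace the flat `steady_weekly` with the analogue queue's weekly and intraday profiles, but the structure — range, profile, ramp — is the same.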
Deliberate over-staffing in the early weeks
Because the early forecast is uncertain in both directions, deliberate over-staffing is the right response for the first 4–8 weeks of the operation. The cost of being short on day one is acute — SL breaches, customer complaints, demoralised agents, contractual penalties — while the cost of being over-staffed is recoverable: the operation absorbs the cost for a few weeks, learns the actuals, and rebalances. A 15–25% buffer is typical. Operations that try to launch at exactly forecast capacity almost always undershoot in the first month and spend the next quarter recovering.
Build the buffer transparently. The capacity plan should show the planned headcount, the forecast demand at the most-likely estimate, and the buffer explicitly. Senior management can then make an informed choice about how much insurance they want to buy.
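A minimal sketch of a transparent buffer line in the capacity plan. The volume, AHT, shrinkage, and paid-hours figures are invented for illustration, and the simple workload-to-FTE conversion ignores queueing effects (no Erlang adjustment):

```python
# Illustrative capacity line: demand-based FTE plus an explicit buffer.
# All figures are assumptions; the FTE conversion is workload-only.

weekly_volume = 770          # most-likely weekly contacts at full ramp
aht_minutes = 8.0            # assumed average handle time
shrinkage = 0.30             # holidays, sickness, training, meetings
weekly_paid_minutes = 37.5 * 60

workload_fte = (weekly_volume * aht_minutes) / (weekly_paid_minutes * (1 - shrinkage))
buffer = 0.20                # within the typical 15-25% early-weeks range
planned_fte = workload_fte * (1 + buffer)

print(f"demand FTE {workload_fte:.1f} + {buffer:.0%} buffer = planned FTE {planned_fte:.1f}")
```

Showing all three numbers — demand, buffer, plan — is the point: the buffer is a visible insurance purchase, not padding hidden inside the forecast.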
The recalibration discipline
The first eight weeks of actuals are worth more, week for week, than any data the operation will produce later. The volume, AHT, and shrinkage that emerge are real observations from a small sample, and the planning function's job is to convert them into a refreshed forecast as fast as possible without over-fitting the noise.
A workable cadence: refresh the forecast weekly for the first 8 weeks, fortnightly for the next 8, monthly thereafter. Each refresh blends the new actuals with the original analogue-based assumption, weighting actuals more heavily as more data accumulates. By week 12 the analogue assumption usually has only a residual role; the forecast is mostly its own data. By week 26 it is a normal forecast.
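One way to implement the blend is to weight the actuals by the number of weeks observed and give the analogue assumption a fixed "equivalent weeks" weight. The equivalent-weeks value of 6 here is an assumption chosen so that actuals dominate by roughly week 12, matching the cadence described above:

```python
# Blending new actuals with the original analogue-based assumption.
# PRIOR_WEIGHT_WEEKS is an assumed tuning constant, not a standard value.

PRIOR_WEIGHT_WEEKS = 6  # analogue assumption counts like ~6 weeks of data

def blended_weekly_volume(analogue_estimate: float,
                          actuals: list[float]) -> float:
    """Weighted blend: actuals' weight grows toward 1 as weeks accumulate."""
    n = len(actuals)
    if n == 0:
        return analogue_estimate
    actual_mean = sum(actuals) / n
    w = n / (n + PRIOR_WEIGHT_WEEKS)
    return w * actual_mean + (1 - w) * analogue_estimate
```

With 12 weeks of actuals, `w` is 12/18 ≈ 0.67, so the analogue assumption retains only a residual third of the weight — consistent with the week-12 behaviour described above.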
Tracking accuracy honestly
An early forecast that is wrong is not a failure. An early forecast that pretends to be more accurate than it is, is. Publish the forecast as a range, track accuracy against the most-likely estimate, and report the trend in MAPE explicitly over the first quarter. Senior managers who saw the range up front will tolerate weekly variance better than senior managers who saw a single number and now feel surprised. The accuracy improves rapidly — from perhaps 25% MAPE in week one to 15% by week four to 8% by week twelve — and showing that improvement curve builds trust faster than any other communication.
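Tracking that improvement curve needs nothing more than weekly MAPE against the most-likely estimate. A minimal sketch, with invented forecast and actual figures purely to show the report's shape:

```python
# Weekly MAPE against the most-likely estimate. The figures below are
# invented to illustrate the report format, not real actuals.

def mape(forecast: list[float], actual: list[float]) -> float:
    """Mean absolute percentage error across paired forecast/actual values."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

weekly = [
    ("week 1", 500, 640),  # (label, forecast, actual)
    ("week 2", 560, 610),
    ("week 3", 620, 650),
]
for label, f, a in weekly:
    print(f"{label}: MAPE {abs(f - a) / a:.0%}")
```

Published alongside the original range, this single trend line is the honest-accuracy story: not "we were right", but "we are converging at the expected rate".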
Common mistakes
Three patterns recur. Building the forecast as a single point. A single number invites false confidence and produces awkward variance conversations later. Anchoring on the business case without challenge. The business case’s customer count and contact-rate assumptions are themselves forecasts, often optimistic. Stress-test them. Failing to recalibrate fast enough. An operation that runs its first forecast for the full year before refreshing has wasted the most valuable learning the new operation produces. The discipline is in the cadence as much as the method.
Conclusion
Forecasting an operation with no history is unavoidable; pretending you can’t do it is not an option. The discipline is in the inputs (analogue queues, benchmarks, business assumptions, process knowledge), the framing (a range, not a number), the deliberate buffer in the early weeks, and the recalibration cadence that converts uncertainty into evidence as actuals arrive. Operations that do this well launch calmly, recover from inevitable surprises quickly, and have a credible forecast within a quarter. Operations that skip it launch with a point estimate that misses by a wide margin, spend three months explaining the variance, and damage planning credibility for the rest of the contract.
Pair this with forecast accuracy metrics that matter and capacity planning for the next 12 months.