When Forecast Accuracy Isn’t the Problem

  • Writer: Paul Edwick
  • Jan 3
  • 4 min read

Updated: Jan 12

Most finance teams aren’t uncomfortable because their forecasts are wrong.

They’re uncomfortable because the numbers in front of them only feel reliable while everything behaves as expected. The totals add up. The logic holds. Yet confidence is conditional — dependent on customers paying on time, growth arriving when planned, and new revenue materialising without friction.


That unease comes from what forecasts often struggle to make visible. Very different customer behaviours sit underneath the same sales line. Sometimes they’re blended together at too high a level; sometimes they’re buried under layers of detail. In both cases, the number looks tidy, but the assumptions doing the work inside it are hard to see.


When leaders or investors hesitate, it’s not because they want more precision. It’s because they want to understand what is actually carrying the forecast — which parts are dependable, which are hopeful, and which would unravel first if conditions shift.


This piece looks at revenue from that external perspective: not how forecasts are built, but what information people look for when deciding whether a forecast deserves to be trusted.


Why the Sales Line Is Where Confidence Breaks First


Revenue attracts scrutiny because it is the least mechanical line in the forecast.


Costs tend to move according to contracts, headcount plans, or known commitments. Revenue depends on customers doing what you expect them to do — buying, renewing, expanding, and paying — often under conditions you don’t fully control.


This is why sales numbers can feel reassuring and unsettling at the same time. The outcome may be plausible, but it rests on behaviours that are uneven by nature. When those behaviours diverge, the forecast doesn’t fail mathematically; it fails under scrutiny.


From the outside, this is where confidence starts to wobble.


What an External Reader Is Really Looking For


An investor, board member, or lender doesn’t approach a forecast asking whether the arithmetic works. They assume it does. What they’re trying to understand is what is actually driving the number — and how fragile those drivers might be.


They instinctively look for separation between a small number of forces, even if those forces aren’t explicitly shown.


In practice, the sales line is often doing the work of several very different assumptions at once:


  • Existing customer behaviour


    Revenue that depends on customers continuing to buy and pay broadly as they have before. Usually the most dependable element — until behaviour shifts.


  • Growth assumptions


    Increases in volume, usage, or customer count. Often credible in direction, but highly sensitive to timing and execution.


  • Price and inflation effects


    Uplifts driven by pricing changes, indexation, FX, or mix. Frequently assumed, rarely stress-tested.


  • New products or services


    Revenue that has not yet proven itself. High potential, high uncertainty, and often over-weighted in forecasts.


  • New channels or markets


    Expansion stories that depend on ramp-up time, learning curves, and sustained focus.


None of these assumptions are unreasonable. The challenge is how they are presented.
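To make the separation concrete, the same sales line can be sketched as a simple decomposition. This is a minimal illustration only; the category names and figures are invented, not drawn from any real forecast:

```python
# Illustrative decomposition of a single sales line into the assumption
# categories described above. All figures are hypothetical.
revenue_drivers = {
    "existing_customers": 8_200_000,  # demonstrated customer behaviour
    "growth_assumptions":   950_000,  # volume / usage increases
    "price_effects":        310_000,  # pricing, indexation, FX, mix
    "new_products":         640_000,  # revenue not yet proven
    "new_channels":         400_000,  # expansion stories
}

total = sum(revenue_drivers.values())
print(f"Total sales line: {total:,}")

# The same total, seen as shares, shows what is actually carrying it.
for driver, amount in revenue_drivers.items():
    print(f"{driver:>20}: {amount / total:6.1%}")
```

Nothing in the breakdown changes the total; it only makes visible which assumptions are doing the work inside it.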


Where Clarity Is Lost


Loss of confidence does not come from a single forecasting mistake. It emerges in two common — and opposing — ways.


In some organisations, assumptions are blended together at too high a level. Very different customer behaviours are averaged into a single, smooth outcome. The total looks stable, but fragility is hidden.


In others, the opposite occurs. Forecasts become so granular that the primary assumptions are obscured by volume. Detail accumulates, but it becomes harder to answer simple questions about what really matters: which customers drive the base, which growth is dependable, and which revenue is still aspirational.


The effect is the same in both cases: whether assumptions are averaged away or buried in detail, the people reading the forecast struggle to see where risk actually sits.
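The averaging problem can be shown numerically. In this invented scenario, two forecasts report the same blended total, but behave very differently when unproven revenue disappoints:

```python
# Two forecasts with the same blended total but different composition.
# Figures are hypothetical; the "downside" applies an illustrative shock
# in which unproven revenue halves while the demonstrated base holds.
forecast_a = {"demonstrated_base": 9_000_000, "unproven": 1_000_000}
forecast_b = {"demonstrated_base": 6_000_000, "unproven": 4_000_000}

def downside(forecast):
    # Demonstrated revenue holds; unproven revenue halves.
    return forecast["demonstrated_base"] + forecast["unproven"] * 0.5

for name, f in [("A", forecast_a), ("B", forecast_b)]:
    print(f"Forecast {name}: total {sum(f.values()):,}, "
          f"downside {downside(f):,.0f}")
```

Seen only as a blended total, the two forecasts are identical; the fragility of the second one is visible only once the composition is separated out.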


Why Confidence Erodes Before Accuracy Is Tested


When revenue drivers are hard to distinguish, discussion shifts away from business reality and toward defence.


Forecast reviews slow down. Time is spent explaining methodology, reconciling views, or clarifying what has changed since last time. Ownership becomes blurred. When the number moves, it’s unclear whether the cause was customer behaviour, pricing, growth assumptions, or something new failing to materialise.


This is where explanation overhead tends to increase. Decisions are deferred, not because information is missing, but because confidence is conditional.

From an external perspective, this is where apparent stability masks underlying fragility: everything adjusts itself neatly — until it doesn’t.


What Builds Confidence Before Accuracy Ever Comes Into Play


Most of this discomfort appears long before accuracy is proven or disproven.


Confidence begins to form when readers can see, without excavation:


  • Which parts of revenue are anchored in demonstrated customer behaviour

  • Which parts depend on execution going right

  • Which parts are new, unproven, or timing-sensitive


They are not asking for more detail. They are asking for clearer separation of belief.


When that separation is missing, even an accurate forecast feels brittle. When it is present, imperfections are tolerated — because the risk is visible and intelligible.


That distinction explains why so many forecasting debates are not really about whether the number is right, but whether it is safe to rely on.


Conclusion


Forecasts lose credibility long before they miss their numbers. They lose it when the assumptions that matter most — customer behaviour, growth, pricing, and new revenue — are either blended together at too high a level or obscured by excessive detail.


From the outside, accuracy is secondary to visibility: the ability to see which parts of revenue are dependable, which are exposed, and which would fail first if conditions shift.


Until that separation is clear, even a well-built forecast feels fragile. That’s why forecast accuracy so often isn’t the problem — confidence is.
