Project Estimates are Always Best Case

When we’re about to start a project, or build a major new feature in an app, we break it down into tasks and estimate how long they will take. But we always end up exceeding those estimates.

Some people say that software teams are bad at estimating. One argument explains this by saying that software is intangible, so you can’t see what remains to be done, unlike touring an under-construction airport, where you can see that the runway is built but the check-in counters and escalators aren’t.

Others point out that schedule and cost overruns affect not just tech but all domains, even airports and metros, which have been built hundreds of times before. A new airport will practically be the same as any other: they all have check-in counters, security checks, immigration, restaurants and shops, gates and a runway. It’s not as if you’re going to build an airport with a river instead of a runway. Why, then, do even such projects exceed time and cost estimates? And not by a reasonable amount like 20%, but sometimes by 2x.

I think I’ve finally understood why.

You really know what a task involves only once you get into it, unless you’ve done the exact same task dozens of times, like a Swiggy delivery guy who can estimate well how long a delivery will take, because he’s delivered dozens of packages from Indiranagar to Ramamurthy Nagar, say. With creative work, we don’t do the same task in exactly the same way again and again. When that happens, we automate it by building a reusable component, like a JSON parser. If every project required us to build a JSON parser from scratch, we’d get good at estimating how long that takes, because we’d have done it multiple times before. But we don’t; we do something new every time, so we can’t estimate.

Even with more routine and constrained tasks like building an airport, if you’re the person in charge, the fact that other people have built hundreds of airports doesn’t help you make a reasonable estimate, unless you’ve built several yourself.

Only after you get into a task do you partially understand what it takes, and only when you complete it do you fully understand what it took. As the saying goes, the only complete specification of what a software program should do is the program itself. When you estimate ahead of time, you’re actually making a best-case estimate, considering only the few aspects you know, assuming there are no surprises. You can’t account for these, by definition. They’re unknown unknowns.

This leads us to a conclusion: Software estimates are always best-case.

So what? What can we do differently once we accept this?

First, don’t be surprised when tasks invariably take longer than estimated. Accept that as a reality rather than seeing it as a shortcoming in yourself, even if (product) managers try to make you feel that way. A lot of stress in our lives comes from not understanding or accepting the reality of what we’re working with. A manager many years back told me that estimates should be more precise and that I should work on it, without offering any suggestions on how. This is not useful: saying estimates should be more precise doesn’t make them more precise, any more than saying my car should pollute less magically makes it pollute less. If you do nothing to solve a problem, don’t then criticise yourself for having that problem.

Second, if you’re in a position of authority over others, keep in mind that estimates can’t be contracts. Don’t misuse them as a stick to hit people with. That would make you a pointy-haired boss who doesn’t understand the nature of the work but makes specific demands about it.

Third, sometimes people say you should under-promise and over-deliver. I get where they’re coming from, and it’s a good goal to aspire to [1]. But we’ll never be able to reach it, since it requires knowing what the worst case is. If you could accurately estimate that a task will take between 1 and 2 weeks, you could tell people it will take 2 weeks, and always over-deliver. But in reality you don’t know what the upper bound is. It might well be 10x the best case. But if you go around telling people it will take 10 weeks, they’ll conclude you’re clueless. Too high a variability in your estimate, like 1–10 weeks, makes it useless to plan with. So the only solution is to give a best-case estimate, and not criticise yourself for under-delivering.

Fourth, acknowledging that your estimates are best-case doesn’t mean you should stop estimating, or that you should mention the first number that comes to mind. Spend 30 seconds [2] estimating each task: break it into sub-tasks, estimate how long each takes, and add them up to come up with something better than a wild guess.

Fifth, since estimates are always best-case, say no to features till they prove their value, because otherwise you’re saying yes to an unknown amount of work. This will cause the schedule to slip, or you’ll have to remove these tasks from the schedule partway into the project, causing churn.

Sixth, uncertainty makes prioritisation critical. If an important task is put lower down in the order of tasks to do, it may not get done for a long time. And because of the uncertainty, you don’t know when it will, so you can’t even plan for it. Suppose you say, “X is important, but is needed only a month from now, and it should take only a week, so let’s work on other tasks for the first three weeks”. The other tasks may end up taking five weeks, causing you to miss your goal of launching X a month from now. If you realise three weeks in that the other tasks are not done, and stop working on them, switching gears is an overhead, which means less work gets done by the end of the quarter, and ultimately frustrates people if it keeps happening. Don’t be the pointy-haired PM who demands more output while slowing the team down with avoidable overhead like context-switching. Another possibility is that the other tasks are indeed done in the estimated three weeks, but X takes two weeks, again causing you to miss your timeline. What I’d do in this scenario, if X is the most important goal, is to do X first, even if that means it’s delivered ahead of schedule. As problems go, that’s a good one to have. Don’t try to over-optimise your schedule by packing things in tightly, because that will fail.

Seventh, try to decouple development and launches. For example, at NoctaCam, we launch every 10 days, no matter how much work has been done [3]. We don’t say, “Feature X is done this fortnight, but feature Y will be done next fortnight, so let’s launch both at once.” Because Y may take a month. Or a month and a half. Which means that X may not launch for two months. Are you okay with that? If not, let X launch by itself. X may need to go out soon, or slow feedback cycles may prevent improvement and demotivate people. Launching every 10 days [4] prevents these problems. In a world of uncertainty, where you can’t plan, you need to keep things flexible and decoupled, as opposed to making a rigid plan and trying to pack everything together precisely, which won’t work.

Think about what else you would do once you accept the reality that estimates are always best-case.

[1] If you estimate that something will take 2–3 hours, say “3 hours”.

Or if I estimate that something has four sub-tasks, each of which will take an hour, I may feel uncomfortable. When I stop and try to figure out why, it may be because of my implicit assumption that I’ve identified all the sub-tasks, or that all of them will go according to plan. In such cases, I pad the estimate, say by another hour, giving 5 hours rather than 4.
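The padding heuristic above amounts to a tiny calculation. A minimal sketch, using the hypothetical numbers from this footnote (four one-hour sub-tasks, a one-hour buffer):

```python
def padded_estimate(subtask_hours, buffer_hours=1):
    """Sum the best-case sub-task estimates, then add a buffer
    for sub-tasks we failed to identify or that won't go to plan."""
    return sum(subtask_hours) + buffer_hours

# Four sub-tasks of an hour each, padded by one hour -> 5 hours.
print(padded_estimate([1, 1, 1, 1]))  # 5
```

The buffer is still a guess, of course; the point is only that the total should be more than the naive sum of the parts.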

Some simple things like these will help you estimate better.

[2] A little goes a long way. It’s worth spending 30 seconds estimating a task that will take an hour or more. Even if that hour will be spent by someone under you, and the 30 seconds will be spent by you, the founder, whose time is more important.

[3] We use the iOS App Store’s phased release, which takes a week. Whenever a phased release completes, we launch, as long as there’s something either for the user (like a feature) or for us (like analytics, or optimisations to get more people to pay). If no improvement has been made since the last launch, we, of course, delay the launch.

[4] That’s just one example of a process designed around the reality that estimates are best-case. Another might be: if we haven’t launched in two weeks, anyone who’s implemented a feature they want live is empowered to call for a launch, without having to spend a significant amount of time in a team meeting justifying why it’s important.

Consulting CTO. Earlier: Google | Founder | CTO | Advisor