Estimating fixed-outcome projects has always been a challenge for many organisations; the reality is we are not very good at it.
Single point estimates
The approach taken by many teams starts off well. They decompose the work into small chunks to better understand what it takes to complete the outcome, then estimate each chunk and sum the estimates.
However, if a single decomposed task overruns [and let's face it, they often do] then the whole project overruns. Teams mitigate this risk by simply adding a contingency buffer on top; 10-30% is typical.
If I ask that team how confident they are that they can complete the project within the estimate and contingency, I might get answers such as “pretty confident” or “yeah, mostly confident”. It’s hardly evidence-based management!
The more experienced project management-driven teams might take a more sophisticated approach. They will use a 3-point estimating approach, which in the project management world is known as PERT (**P**rogram **E**valuation and **R**eview **T**echnique).
PERT starts off the same as the single-point approach; we decompose the work into smaller activities. However, this time we give three estimates for each activity.
A **minimum** estimate: What if this is straightforward, or has minimal risk associated with it?
An **expected** estimate: Based on past experience, how long does this typically take, accounting for the small unknowns we’ll need to deal with?
A **maximum** estimate: This is our Murphy’s law estimate [what if everything that could go wrong does go wrong]; what if this is really risky, or we just don’t understand enough yet to give a confident expected estimate?
We then calculate a weighted average of these values.
The PERT estimate is
> (Minimum + (4x Expected) + Maximum) / 6
While not sophisticated, it does account for things going wrong: the larger our maximum, the more we skew away from the expected value. Thus we get an element of contingency built into our numbers.
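As a quick sketch of the formula in Python rather than a spreadsheet (the task numbers are invented for illustration):

```python
def pert_estimate(minimum: float, expected: float, maximum: float) -> float:
    """PERT weighted average: (minimum + 4 * expected + maximum) / 6."""
    return (minimum + 4 * expected + maximum) / 6

# A task expected to take 5 days, 3 in the best case, 13 if Murphy strikes:
print(pert_estimate(3, 5, 13))  # -> 6.0
```

Note how the large maximum drags the estimate above the expected 5 days.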
However we still cannot provide confidence statements like “I’m 80% confident we can deliver this outcome at this budget / estimate.”
Probability theory and simulation
Ok, firstly this section includes some simple maths. I’ll try and make this clear, but if you want to skip over this, you can just trust that simulation works [The rocket scientists at NASA do] and skip to the next section.
Most readers are probably familiar with a normal distribution curve. It shows how outcomes cluster around an average, with most falling between a lower bound and an upper bound. Think of the height of people, household income, or time to complete a task.
If we ran our project hundreds or thousands of times [^ - This is Monte Carlo Simulation. Using probability theory to predict outcomes based on known values of average and variance.] in multiple parallel universes we would see the same behaviour: sometimes it would run well and come in at the lower bound, sometimes it would be terrible and everything would go wrong, but mostly it would land around some kind of average. (Trust me - it’s true, but proof of probability theory is beyond this blog.) Furthermore, that same probability theory tells us that 99.7% of our outcomes [for practical purposes, all of them] would fall within +/- 3 standard deviations of the average.
Standard deviation is simply a measure of variance in data. It is the square root of the variance, which is the average of the squared deviations from the mean. You don’t need to know how to calculate this: given a range of data, Excel will do it for you with the STDEV.P() formula.
HINT: Given 99.7% of our population (i.e. almost all of it) falls between +/- 3 standard deviations we can perform a quick hack. Standard Deviation = (Max - Min)/ 6
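In code form the hack is a one-liner. A minimal sketch, using the same illustrative 3-point numbers as before:

```python
def std_dev_hack(minimum: float, maximum: float) -> float:
    """Approximate sigma: if min..max spans ~99.7% of outcomes
    (+/- 3 sigma), the full range covers 6 standard deviations."""
    return (maximum - minimum) / 6

print(round(std_dev_hack(3, 13), 2))  # -> 1.67
```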
Using probability theory
From our normal distribution curve it should be clear now why single point estimates (which are usually the ‘average’) are no good. In our parallel universes only 50% of our projects came in at the average or less (left of the average in our distribution). In single point estimating, when asked the question “how confident are you?” you should never answer more than 50%.
PERT is a great starting point for simulation. We have a range of values and from them we can calculate an average and standard deviation. The simulation part is then using Excel to model a normal distribution curve for these values, and re-calculating across hundreds or thousands of cells.
Here is a small simulation sample from an Excel sheet I use, based on the estimates in the above images. [I use 5,000 simulations, but hide a large number of the simulation cells; this speeds up the process as Excel doesn’t have to repaint all of these values every time we recalculate the sheet.]
The ‘magic’ is in the simulation cells coloured orange, which have a formula of the form:
> =NORM.INV(RAND(), $F4, $G4)
> This returns a random value from the probability curve with a mean from $F4 and a standard deviation from $G4
> As you can see from the image, the values do indeed fall between each task’s minimum and maximum estimates.
My sheet then sums up all of the task estimates to get a project estimate for each simulation.
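Outside Excel, the same simulation can be sketched in a few lines of Python. The task estimates here are hypothetical, and `random.gauss(mean, std_dev)` plays the role of `NORM.INV(RAND(), mean, stdDev)`:

```python
import random

# Hypothetical 3-point task estimates in $k: (minimum, expected, maximum).
tasks = [(40, 60, 110), (30, 45, 80), (50, 70, 130)]

def simulate_project(tasks, simulations=5000):
    """One total project cost per simulation: each task's cost is drawn
    from a normal curve built from its PERT mean and (max - min) / 6 sigma."""
    totals = []
    for _ in range(simulations):
        total = 0.0
        for minimum, expected, maximum in tasks:
            mean = (minimum + 4 * expected + maximum) / 6
            std_dev = (maximum - minimum) / 6
            total += random.gauss(mean, std_dev)  # ~ NORM.INV(RAND(), mean, sd)
        totals.append(total)
    return totals

totals = simulate_project(tasks)
```

Each entry in `totals` is one “parallel universe” of the project; everything after this point is just summary statistics over that pool.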
The management view
The simulations view alone doesn’t provide valuable conclusions; it is simply a pool of possible project outcomes. For conclusions and an evidence-based approach, I summarise these simulations into useful data.
Let’s distil what we see here.
1. First and foremost is the distribution curve of our simulations. The heart of our model. This shows the frequency of outcomes from our simulation that fell within certain project cost buckets. We can then overlay a cumulative distribution curve over the top of this. In this example, we can see that 80% of our simulated outcomes cost $336,400 or less.
2. Our confidence levels provide simple percentile calculations based on all our outcomes. In this case I want to be 70% certain I can deliver the project within budget. This tells me my estimate should be $329,983.
3. However, I also want to calculate a proper evidence-based contingency. For this I say I want to be 95% certain, and the sheet tells me the budget at 95% and shows the difference between the two values.
4. We get some comparison metrics: the best simulation ($217k), the worst ($416k), and a PERT-based cost ($277k).
I can now have an evidence-based conversation of the form. “I’m 70% certain I can complete this project with a budget of $329,983. Furthermore, with an allocated contingency of $29,490 my confidence increases to 95%”.
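The percentile calculations behind those confidence statements are straightforward. A sketch assuming we already have the pool of simulated totals (faked here with seeded random draws, so the exact dollar figures will differ from the sheet’s):

```python
import random

random.seed(1)

# Stand-in for the sheet's pool of 5,000 simulated project costs ($).
totals = sorted(random.gauss(300_000, 35_000) for _ in range(5000))

def budget_at(confidence: float, sorted_totals) -> float:
    """Budget at which `confidence` fraction of simulations came in at or under."""
    index = min(int(len(sorted_totals) * confidence), len(sorted_totals) - 1)
    return sorted_totals[index]

budget_70 = budget_at(0.70, totals)
budget_95 = budget_at(0.95, totals)
print(f"70% budget: ${budget_70:,.0f}, contingency to 95%: ${budget_95 - budget_70:,.0f}")
```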
You may ask: why not just say you want to be 100% certain? You could, but what you are saying is that you expect everything to go wrong. In this example, the 100% outcome costs $416,864, which is 25% more budget than my highly probable 70% outcome. At this budget, management *might* not be able to justify the business case and green-light the project; or, if you are a vendor organisation, you might price yourself out of the work.
Use in an Agile context
Simulation can be extended into an Agile context. We could simulate completion dates of releases, features or epics by modelling variables such as:

- Team velocity (or flow of completed work if we are #noEstimates).
- Rate of arrival of new work.
- Unplanned and remediation work.
As long as we have some historic data, we can calculate an average and a standard deviation, and then use these in the Excel NORM.INV(RAND(), mean, stdDev) formula.
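For instance, here is a sketch of a throughput-based release forecast in Python; the velocities and backlog size are invented for illustration:

```python
import random
import statistics

velocities = [23, 31, 27, 25, 34, 22, 29, 28]  # hypothetical points per sprint
backlog = 180                                  # story points left in the release

mean = statistics.mean(velocities)
std_dev = statistics.pstdev(velocities)        # population std dev, like STDEV.P()

def sprints_to_finish() -> int:
    """One simulated release: draw a velocity per sprint until the backlog is done."""
    remaining, sprints = backlog, 0
    while remaining > 0:
        remaining -= max(random.gauss(mean, std_dev), 1)  # guard against <= 0 draws
        sprints += 1
    return sprints

results = sorted(sprints_to_finish() for _ in range(5000))
forecast = results[int(len(results) * 0.85)]  # sprint count we're ~85% confident in
```

The same percentile trick as the cost model then turns the pool of simulated releases into a confidence statement about dates.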
A final word of warning
As with all models, simulation will suffer from garbage in, garbage out. If our 3-point estimates are way off the mark, then our final project estimate will be junk. Don’t expect good simulation to solve this.