Building a Simple Monte Carlo (at least that was the intention)

Monte Carlo simulations are often seen as an extremely complex tool reserved for programmers, stats nerds, and the Kanban elite.

However, I’d argue that they are a lot more accessible than most people think. Furthermore, I’d suggest that using an off-the-shelf tool to run your Monte Carlos without a basic understanding of how they work can lead to confusion and misunderstandings.

In this post I’m going to walk you through how to create a Monte Carlo simulation to forecast the completion of a project. I’m going to use C#, however the same principle can be applied in any language – I’ve even seen it done in Excel!

The question I want to answer is: when will I finish Dan Vacanti’s excellent book Actionable Agile Metrics for Predictability II? It’s somewhat embarrassing that I’ve not read it so far; I hope the fact that I’m featuring it in my blog post goes some way to making up for that oversight.

To demonstrate how the forecasting method works I’m going to create the forecast as I’m starting this post, continue to read, and then reflect on that forecast once I finish the book. My hope is that this will show not just how the forecast is constructed but also how the future actually plays out. [EDIT MADE LATER]Why do projects never go the way you expect!?[/EDIT MADE LATER]

Prior to starting Dan’s book I was reading Facilitating Professional Scrum Teams by Patricia Kong, Glaudia Califano, and David Spinks. Seriously, it’s a good read; I highly recommend it. This means I have some historical throughput data to get me started, and I know how many pages Dan’s book has, so I can use this information to create a forecast. This is similar to how we’d use work from our current project to forecast an upcoming one in real life.

Here’s my reading data from the 28th of March onwards.

Facilitating Professional Scrum Teams – 258 pages

Shortly after finishing that book I started on AAMP2, which has 259 pages.

Excellent, this brings us up to today, and now we have some data we can work with. Before I go any further I feel it’s worth highlighting a couple of points:

  • These are different books with different authors, writing styles, and potentially a different number of words on each page.
  • We had a long Easter weekend where I didn’t do much reading.
  • I was a little unwell.

These are all reasons why our historical data may not match our upcoming days perfectly, which could make any forecasts less accurate (spoilers – there’s a huge irony here: separating exactly this kind of signal from noise is what Dan’s book turned out to be about!).

Let’s get forecasting. Fundamentally, what a Monte Carlo simulation does is create a number of future timelines, like parallel universes from Star Trek. Each one of these would have events playing out in a slightly different way. Some would be a lot like our own universe; others would be very, very different.

So for example, imagine that I’d written down the number of pages I’d read each day last week as the following:

2, 5, 0, 3, 2

That’s fixed; that’s not going to change. Now, imagine that I have 10 more pages to read. I could create one possible future of how many pages I’d read by randomly selecting days from that sample pool and stopping when I reach the goal of ten pages. This could result in:

2, 5, 0, 0, 3

OK, that’s one possible future. Now let’s look at some others:

5, 5

0, 2, 2, 3, 5

Notice I can only draw out numbers which were contained in our sample data, and they’re weighted according to how often they appeared over the sample period. This ensures that more common results are drawn more often and less common ones less often. The power of the Monte Carlo method is that by doing this many, many times and considering lots of possible futures, the likely finish dates come up frequently and the unlikely ones rarely.

Let’s look at that in code. On the 18th of April, knowing I was unlikely to read any more that day, I wrote the following script.
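Something like the following – a simplified sketch with placeholder throughput data rather than my exact figures:

```csharp
using System;
using System.Collections.Generic;

class MonteCarloForecast
{
    static void Main()
    {
        // Pages read per day over the last 20 days. These values are
        // illustrative placeholders - my real script used my actual data.
        int[] throughput = { 2, 5, 0, 3, 2, 0, 8, 4, 0, 1, 6, 3, 0, 2, 7, 0, 5, 3, 1, 4 };

        const int pagesRemaining = 178; // pages left in the book
        const int trials = 2500;        // number of simulated futures

        var random = new Random();
        var results = new SortedDictionary<DateTime, int>();
        var today = new DateTime(2024, 4, 18);

        for (int trial = 0; trial < trials; trial++)
        {
            int pagesRead = 0;
            DateTime currentDate = today;

            while (pagesRead < pagesRemaining)
            {
                // Move the date on to tomorrow - we'd already read today.
                currentDate = currentDate.AddDays(1);

                // Draw a random day's reading from the sample pool.
                pagesRead += throughput[random.Next(throughput.Length)];
            }

            // Record the date this possible future finished the book.
            results[currentDate] = results.TryGetValue(currentDate, out int n) ? n + 1 : 1;
        }

        // Confidence is the share of the 2500 runs that finished on each date.
        foreach (var (date, count) in results)
            Console.WriteLine($"{date:dd/MM/yyyy}\t{(double)count / trials:F4}");
    }
}
```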

For those of you who aren’t developers I’ll walk through it for you.

  1. First we load in the data – done off screen; this could be done with a file load or, as I did, by just copying my page throughput data directly into an array. I’m using the last 20 days’ worth of data, which is generally a good number to choose – for more details see Dan Vacanti’s books.
  2. Calculate the number of pages remaining, in this case 178. Sure, some of those pages will be index and copyright information which I don’t typically read, but it’s a good figure to be aiming for.
  3. Do the following 2500 times (you could go for 10,000 or any other big number – whatever you want, there are diminishing returns).
    1. Move the date on to tomorrow – remember, we’d already read today.
    2. While I’ve not finished the book:
      1. Get a random value from our input data and add it to the number of pages read.
      2. Move the current date forward by one day.
    3. Once the book is finished, record the completion date in the results variable.
    4. Move on to the next iteration.

When I ran the script and saved the output to a file I got the following results. In this case the date is when we expect to finish, and the confidence is the decimal representation of how many times that date occurred during our 2500 runs.

To make this a little more readable I usually add a third column to show the cumulative confidence.
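In code, that’s just a running total over the same results – continuing the sketch above:

```csharp
// Add a running total of the confidence for each date
// to get the cumulative column.
double cumulative = 0;
foreach (var (date, count) in results)
{
    double confidence = (double)count / trials;
    cumulative += confidence; // chance of finishing on *or before* this date
    Console.WriteLine($"{date:dd/MM/yyyy}\t{confidence:F4}\t{cumulative:F4}");
}
```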

When graphed, the data looks like this:

The blue bars are the individual probability that I’d finish reading the book on that day. The orange line is the cumulative probability that the book would be finished on or before that date.

This is extremely powerful in two ways. Firstly, we can see that the date I am most likely to finish is the 8th of May 2024. However, if you look at the table you can see that I am only 47.32% confident that I will finish the book on this date or before.

This is a topic which is described at great length elsewhere so I won’t dwell on it here. Suffice it to say that the most likely result is not always especially likely. If someone asked me to place a bet on which day I’d finish, then I’d likely venture the 8th of May. However, if I wanted to share a date I was confident I could finish the book on or before – a much more likely scenario in real life – the cumulative figure is far more helpful.

Tradition (and risk appetite) dictates what value to use. Personally, I generally give my forecasts at 85% confidence. Therefore, if pressed for a date I was confident I could finish the book by, I’d give the 15th of May.
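If you want to pull that date out programmatically, it’s just a walk down the cumulative column until it crosses your chosen threshold. A small helper along these lines would do it (again building on the earlier sketch):

```csharp
// Find the first date whose cumulative confidence reaches the target,
// e.g. target = 0.85 for an 85% confidence forecast.
static DateTime DateAtConfidence(SortedDictionary<DateTime, int> results,
                                 int trials, double target)
{
    double cumulative = 0;
    foreach (var (date, count) in results)
    {
        cumulative += (double)count / trials;
        if (cumulative >= target)
            return date; // first date we're at least 'target' confident about
    }
    throw new ArgumentException("Target must be between 0 and 1.", nameof(target));
}
```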

Now, at this point I’m going to stop writing and continue reading. Let’s see what happens!

insert time passes montage here

I finished the book on the 30th of April. Firstly, wow Dan – what a great read. Highly recommended to anyone who hasn’t read it.

Here’s my reading data:

Great, so job done, yes?

I knew that there was a risk in writing the post in this format that I’d suffer the perils of a live demo, and that seems to have struck. Strictly speaking I was successful: I did indeed finish the book before the 8th of May. However, I only gave myself a 2% chance of finishing on or before today. So what happened? Did I hit that one-in-fifty probability and get really lucky – or was something else going on?

In one of life’s wonderful ironies, this is one of the key topics which Dan’s book addresses. He describes several methods to understand whether you have a stable and predictable system, and therefore whether it is suitable for a forecast, using a Monte Carlo or otherwise. To paraphrase Jurassic Park: did I get so caught up with whether I could forecast the end date using a Monte Carlo method that I didn’t stop to think about whether or not I should?

I could leave the post here; I’ve done the job I set out to do and shown how to create a simple Monte Carlo. However, I want to dive into this a little deeper. Despite only learning about this topic in the very book my forecast was about, I believe there’s value in exploring the pitfalls of the forecast – and in demonstrating that it’s OK to learn in public!

With that in mind I used the guidance in Dan’s book to create the following charts:

These are known as XmR charts and are intended to show the variation within my process. Without diving into too much detail, they help separate routine variation (the normal day-to-day differences in my reading) from signals that something unpredictable is going on.
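I won’t reproduce the full charts in code, but the limit lines themselves come from a simple calculation. A sketch using the standard XmR constants (2.66 for the individuals chart, 3.268 for the moving range chart) might look like this:

```csharp
using System;
using System.Linq;

static class XmR
{
    // Natural process limits for an XmR chart.
    public static (double Centre, double Upper, double Lower, double RangeUpper)
        Limits(int[] pagesPerDay)
    {
        double mean = pagesPerDay.Average();

        // Moving range: the absolute difference between consecutive days.
        double averageMovingRange = pagesPerDay
            .Zip(pagesPerDay.Skip(1), (a, b) => (double)Math.Abs(b - a))
            .Average();

        return (Centre: mean,
                Upper: mean + 2.66 * averageMovingRange,
                Lower: Math.Max(0, mean - 2.66 * averageMovingRange), // can't read negative pages
                RangeUpper: 3.268 * averageMovingRange);
    }
}
```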

The red horizontal lines are based on the 20 days of reading up to and including the 18th (when I made the forecast). The large spikes crossing the top line show that on several days something different happened: I read far more than the normal day-to-day fluctuations would account for. This tells me that the amount I was reading was extremely unpredictable – some days I’d read nothing, others I’d read several chapters.

All this variation makes it extremely hard to generate forecasts. After all, would you prefer to forecast for a team who deliver one or two pieces of work without fail, or one who delivered ten items once in two weeks but nothing the rest of the time? It’s that steady consistency and predictability which gives us confidence in our forecasts, and that’s what we’re visualising here. Because there are multiple points breaching the three-sigma line, there’s a good chance that this will be a bad forecast.

There are other clues too (again, for a full list I strongly recommend reading Dan’s book). In early April there were three days above the two-sigma line, from the 22nd to the 26th there were five points above it, and let’s not even talk about that big final day – I blame the index pages and a gripping finale. There are also some fairly major drop-offs at the weekends. Moving on to the bottom chart, there are two points which highlight a significant ramp-up between two consecutive days – again, most likely the aforementioned weekends to Mondays.

Now that we’ve established that I wasn’t reading consistently, and that this was most likely undermining my forecast, I want to take action to try to stabilise my process so that I can make a new, more useful forecast.

The first thing I can do is obvious: I can recalculate both my forecast and XmR charts based on the last 20 days. I mentioned earlier in the post that we’d had the Easter bank holidays and I’d been ill; the past 20 days are likely to be more representative of the days ahead. The next thing I can do is be aware of the impact those non-reading days and occasional bursts of extra pages are having on my predictability.
This makes intuitive sense: these are the equivalent of days off and extra overtime. While they have a short-term impact and will help me finish the book, they also introduce spikes which make me less predictable and harder to forecast longer term.

The next book on my backlog was a shorter one. At only 94 pages, I hoped it wouldn’t take too long. I’m also going to remove 8 pages of indexes and references. That’s 86 pages to read.

My new Monte Carlo tells me that I have an 85% confidence of finishing the book on or before the 11th of May. However the most likely dates are around the 8th.

My recalculated XmR charts tell me that more than 38 pages in a single day signals a change in my reading routine, as would two consecutive days over 25, or four days in a row over 12.
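Those thresholds fall out of the same calculation: one sigma is a third of the 2.66 band. Continuing the XmR sketch from earlier (plugging in your own mean and average moving range), the detection lines look something like:

```csharp
// Sigma zones derived from the same mean and average moving range.
double sigma = 2.66 * averageMovingRange / 3.0;

double threeSigmaLine = mean + 3 * sigma; // a single day above this is a signal
double twoSigmaLine   = mean + 2 * sigma; // two consecutive days above this is a signal
double oneSigmaLine   = mean + 1 * sigma; // four days in a row above this is a signal
```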

With this revised model of my reading habits I’m going to carry on, read about Communities of Practice, and return to this post in about a week’s time.

more time passes

I finished the book on the 8th of May (does that sound like I fixed it to hit the date exactly? It’s never perfect – I did actually finish on the 8th!).

Here’s my reading data:

Now, I’m not suggesting for one moment that consciously avoiding over-reading on particular days and rerunning the forecast moved me from a 1-in-50 likelihood to the most likely date. There were likely other factors at play, such as the shortness of the book.
However, it does highlight a very important aspect of forecasting. If you read 10 pages every single day without fail then you can guarantee that you’ll read 70 in a week, 3,650 in a year, and so on. The same applies to our real-life projects. If we invest in predictability through all the best practices of Kanban – managing WIP limits, right-sizing, and finishing the oldest work first – then not only will this improve throughput, it will also have a significant impact on our ability to forecast.

With better forecasts comes better cost and risk management and more profitable endeavors.

Let’s take a few minutes to wrap this all up. I showed step by step how to create a Monte Carlo simulation, and I used it to forecast how long it would take me to complete a book based on my previous reading history and the number of pages remaining. That forecast was pretty poor: although I finished early, I completed the book on a date I’d forecast with only 2% probability. Subsequently (and now that I’m aware of these techniques, I’d strongly advise doing this ahead of any forecast), I looked at the variation in my process. This highlighted that I was reading very erratically – very little on some days and big bursts on others – which at least partially contributed to the poor forecast. When I took action to improve my predictability, my forecasts became more accurate.

If you want to underline this with one sentence: the more predictable you are at completing work, the more likely it is that your forecasts will be valuable – so invest time in understanding how to make your team more predictable.