Part 3: Estimating the Benefit Categories
Now that we’ve established a set of benefit categories and their components, it’s time to discuss how we can estimate the components of this breakdown. As I said in the opening post of this series, it may seem that some areas (e.g. improved decision making) are too intangible to measure. But we’ll borrow from Hubbard’s excellent book “How to Measure Anything” and pose the question – why do we even care about improved decision making? By asking what end effects we hope to see, we can derive estimates.
With that in mind, let’s go through the structure from the last post and look at ways to put a value on each of the components.
Risk Reduction

Risks have two components that make up their magnitude: the probability of the risk occurring, and the likely impact if it does. General practice is to multiply the likely loss by the probability of loss to derive an expected loss. So, to estimate the benefit of a reduction in risk, estimate the change in probability and the change in impact, derive a new expected loss, and compare it to the old value to estimate the risk reduction benefit. Below are some notes on specific cases.
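The general calculation is small enough to sketch in a few lines of Python (a minimal sketch – the function names and the example figures here are illustrative, not from any particular case):

```python
def expected_loss(probability: float, impact: float) -> float:
    """Expected annual loss: probability of the event times its likely cost."""
    return probability * impact

def risk_reduction_value(p_before: float, p_after: float,
                         impact_before: float, impact_after: float) -> float:
    """Benefit of a risk reduction: old expected loss minus new expected loss."""
    return (expected_loss(p_before, impact_before)
            - expected_loss(p_after, impact_after))

# Illustration: halving a 2% chance of a $1 million loss, impact unchanged
value = risk_reduction_value(0.02, 0.01, 1_000_000, 1_000_000)
```

The same two functions cover every risk case below; only the inputs change.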
Compliance Risk

A way to estimate this is to look at other organizations in similar situations that have been judged in non-compliance; the costs that resulted; and how often this has occurred in the past 5-10 years.
Let’s imagine that on average 2% of audits were failed by organizations, with an average cost of $1 million. Then the risk has a magnitude of 2% * $1 million = $20,000. If we can halve the probability of failure, the model has reduced the magnitude from $20,000 to $10,000, producing a value of $10,000.
Security Breach Risk

A way to estimate this is to look at other organizations in similar situations that have suffered a security breach; the costs of the breach; and how often this has occurred in the past 5-10 years.
Let’s imagine that on average 5% of organizations had a security breach, with an average cost of $10 million. Then the risk has a magnitude of $500,000. If we can reduce the risk to 4%, we reduce the magnitude to $400,000 – a value of $100,000.
Disaster Recovery Risk

Here we can look at occasions in the past 3-5 years where we have had to invoke DR and restore from backup, and how much work was lost. Say that this happens once a year, losing half a day’s work (4 hours) for 200 employees at an average cost of $100 an hour. Then the yearly loss is $100*4*200 = $80,000. If better modeling can reduce the downtime, or the chance of DR being invoked, by 10%, then the value the architecture model adds here is $8,000.
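Working the disaster-recovery figures through as a quick check (one incident a year, 4 hours lost for 200 employees, at an assumed $100 an hour):

```python
hourly_cost = 100        # assumed average hourly cost per affected employee
hours_lost = 4           # half a day's work
employees = 200
incidents_per_year = 1

annual_loss = incidents_per_year * hourly_cost * hours_lost * employees  # $80,000
model_value = annual_loss * 0.10  # 10% reduction in downtime or probability
```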
Operating Cost Reduction
Here we need to estimate the amount that we can save by retiring applications. Let’s assume that we spend $500 million a year on application maintenance (not a far-fetched assumption; I know a few organizations that spend 60% or more of their IT budget on application maintenance). So if we have 250 applications and hope to retire 1 extra application as a result of the analysis, we could make a simple-minded savings estimate of $500 million/250 = $2 million.
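As a quick check, the per-application average works out as follows (same assumed figures):

```python
annual_maintenance = 500_000_000   # assumed yearly application maintenance spend
applications = 250

per_app_cost = annual_maintenance / applications   # average cost per application
apps_retired = 1
savings_estimate = per_app_cost * apps_retired     # simple-minded saving
```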
Reduced System Downtime
Here we estimate the benefit by estimating the reduction in downtime and the value of that reduction.
Say 20,000 employees with an average hourly cost of $50 lose half an hour each per year on average – this works out to 20,000*0.5*$50 = $500,000 a year in outage costs. If having a model reduces this figure by 10%, then we have a saving of $50,000.
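Spelled out in code, with the same figures:

```python
employees = 20_000
hourly_cost = 50                 # average hourly cost per employee
hours_lost_per_employee = 0.5    # half an hour a year

annual_outage_cost = employees * hourly_cost * hours_lost_per_employee
saving = annual_outage_cost * 0.10   # value of a 10% reduction
```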
Here it is best to identify specific known issues that the model can address.
For example, say network congestion is a known problem that costs every member of staff 30 minutes a year. If the average staff cost is $50 an hour, then a company with 20,000 employees is losing .5*50*20,000 in productivity = $500,000. If a model can reduce this by 5%, then the payoff of the model in this area becomes $25,000.
Faster Analysis and Design
We can estimate this area by taking the average hourly rate for an architect, polling a sample of architects on how much time the manual nature of analysis adds to a project on average, and multiplying the two together. Let’s give an hourly cost of $100 for an architect, and say that each architect works on 10 projects a year. If they can save 4 hours on each project, then this gives a saving of 4*10*$100 = $4,000 per architect, or $80,000 a year in a team of 20 architects.
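The same back-of-the-envelope sum in Python:

```python
architect_hourly_rate = 100
projects_per_architect = 10    # projects each architect works on per year
hours_saved_per_project = 4
team_size = 20

saving_per_architect = (hours_saved_per_project * projects_per_architect
                        * architect_hourly_rate)   # per architect, per year
team_saving = saving_per_architect * team_size     # across the whole team
```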
Reduced Time to Market
To estimate the value of faster time to market, we need to take the average monthly revenue of new initiatives that have required the support of the IT department.
For example, let’s assume that 6 new business initiatives involve the IT department each year, and each realizes $600,000 extra revenue each year. Then, on average, these initiatives realize $3.6 million a year, or $300,000 a month. If on average a new initiative can come to market 1 week earlier due to the better co-ordination between IT and the business, this offers a value to the company of roughly $300,000/4 = $75,000 each year.
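The calculation, treating a month as 4 weeks (the simplification behind the $75,000 figure):

```python
initiatives_per_year = 6
annual_revenue_each = 600_000

annual_revenue = initiatives_per_year * annual_revenue_each   # combined revenue
monthly_revenue = annual_revenue / 12
value_of_one_week = monthly_revenue / 4   # one extra week of revenue
```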
Improved Innovation

We can estimate the value of improved innovation by looking at past initiatives that did not take place and now will. Say on average 1 extra initiative will be possible every twelve years, and the earn-out period of an initiative is 2 years. Then the value of this extra initiative, using the $600,000 annual value mentioned previously, comes to $600,000*2/12 = $100,000 a year.
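Annualizing the value of that extra initiative:

```python
annual_value = 600_000                  # annual value of an initiative (from above)
earn_out_years = 2
years_between_extra_initiatives = 12

initiative_value = annual_value * earn_out_years   # total value of one initiative
annualized_value = initiative_value / years_between_extra_initiatives
```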
Improved Decision Making
We can look at past projects, and see what the total cost overrun was across all projects in the previous financial year. If we want to be rigorous, we can even average the data over several years. Then we’ll have to guesstimate, based on experience or on talking to project managers, how much more accurate estimates would be with a clear picture of dependencies – say it reduces an average overrun from 15% to 12.5%, on projects that were budgeted at $300 million and cost $345 million in total last year. A reduction in the overrun from 15% to 12.5% would have reduced the total cost from $345 million to $337.5 million, saving $7.5 million.
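The overrun sum, with the same figures:

```python
budget = 300_000_000           # total budgeted cost of last year's projects
cost_last_year = 345_000_000   # actual total cost: a 15% overrun

overrun_after = 0.125          # guesstimated improved accuracy
cost_after = budget * (1 + overrun_after)   # cost at the reduced overrun rate
saving = cost_last_year - cost_after
```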
Using reference architectures can save several hours analysis on each new project. If we assume that each of 200 projects has 2 hours reduced architecture time at a rate of $100, the payoff becomes $100*2*200 = $40,000.
Here the benefit comes from the reduction in time to plan out capacity and availability, and reduction in time spent to respond to unexpected demands. If we assume that it saves 10 staff a total of 10 hours each per year at an hourly cost of $100, the value becomes 10*10*100 =$10,000.
Here the value comes from the fact that a transformation can take place faster; if analysis is required by a transformation, it can take place as needed – just as with new business initiatives, the value is in time to implement. The issue is that unless one is specifically planning a transformation, estimating the value and the speed becomes a question of ‘how long is a piece of string?’
So if this element is going to be used in the business case, the modeling team will need to know of, and be able to reference, a transformation that is either planned or under consideration. For example, if a move to cloud-based operation could save $5.2 million annually, then using a model to enable that move 2 weeks faster has a value of $5.2M*2/52 = $200,000.
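The same pro-rata calculation in code (the cloud-migration saving is the example figure above):

```python
annual_saving = 5_200_000   # e.g. annual saving from the cloud move
weeks_earlier = 2

value = annual_saving * weeks_earlier / 52   # two extra weeks of that saving
```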
Culture of Co-operation
An improved culture of co-operation could reduce staff turnover and produce new initiatives. We’ll estimate each separately.
A recruiter will often charge 3 months’ salary to place a new candidate, so hiring a new professional at a salary of $120,000 would cost $30,000. If an improved culture means there is a 50% chance that one less person will leave in any given year, then the expected value of this becomes 50% * $30,000 = $15,000.
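The turnover sum, spelled out:

```python
annual_salary = 120_000
recruiter_fee = annual_salary * 3 / 12   # 3 months' salary to replace a leaver
chance_one_fewer_leaver = 0.5            # assumed effect of the improved culture

expected_value = recruiter_fee * chance_one_fewer_leaver
```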
To estimate the value of possible new initiatives, we take the average value of a new initiative (say $600,000 annually, from the analysis above), multiply it by its 2-year earn-out period, and multiply by the chance that a new initiative is identified – say 2%. Then the value becomes .02*600,000*2 = $24,000.
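And the final sum, using the 2% chance and the 2-year earn-out period from above:

```python
initiative_annual_value = 600_000
earn_out_years = 2
chance_identified = 0.02   # assumed chance a new initiative is identified

expected_value = chance_identified * initiative_annual_value * earn_out_years
```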