Building a business case for an Architecture Model (part 4)


Part 4: Components of the Cost Categories

With the benefit categories established, we can now move on to estimating how much this is going to cost. We've got five established cost areas, each made up of several components.

Data Gathering

Data Import. Often, some information relevant to the architecture model will already exist. For example, it's rare that I encounter an organization that does not have some catalog of its applications, its servers, and a mapping between them. Likewise, there may be a list of business processes held in a workflow tool somewhere. It makes sense to feed this information into the architecture model, but doing so will likely involve some level of Extract, Transform and Load (ETL) work.
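
As a sketch of the 'transform' step in that ETL work, the snippet below (Python, with hypothetical field names) normalizes an application-to-server extract as it might come out of an existing catalog:

```python
import csv
import io

def load_app_server_links(csv_text):
    """Parse a (hypothetical) application/server export into model entries.

    The 'transform' here is deliberately minimal: trim stray whitespace
    and upper-case server identifiers so they match on import.
    """
    links = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        links.append({
            "application": row["application"].strip(),
            "server": row["server"].strip().upper(),
        })
    return links

# Example extract, with the kind of inconsistency real exports have
raw = """application,server
Payroll, srv-101
CRM,srv-202
"""
print(load_app_server_links(raw))
```

In practice the transform step is where most of the cost hides: each source system has its own quirks, and each quirk needs a rule like the ones above.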

Import and conversion of existing diagrams. It's entirely possible that some aspects of an architecture model already exist, not in the information stores considered above, but in diagrams created by various initiatives. This could be a business functional decomposition for the entire organization. It could be the deployment architecture for the SAP implementation at the organization. It could be a schematic of the top-level network architecture. Regardless, this information will likely not follow the standard adopted for the overall model, so some conversion work will be needed.

Data Discovery. Depending on the goals of the architecture model and the organization's history, it's entirely possible that no data will exist and it will have to be created from scratch. This is a classic 'interview, document and review' exercise – but it imposes a cost in terms of the time spent.

Data Rationalization

Data cleaning. Importing existing data is a sensible activity, but some of that data may simply be wrong. For example, the list of servers may include many machines that were decommissioned long ago, because nobody removed the entries at the time. So there may well be an exercise required to clear out inaccurate data.
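
A minimal sketch of such a cleaning pass, assuming (hypothetically) that each server record carries a status flag and a last-seen date:

```python
from datetime import date

servers = [
    {"name": "srv-101", "status": "active", "last_seen": date(2024, 5, 1)},
    {"name": "srv-102", "status": "decommissioned", "last_seen": date(2019, 1, 1)},
    {"name": "srv-103", "status": "active", "last_seen": date(2018, 3, 1)},
]

def is_stale(server, today=date(2024, 6, 1), max_age_days=365):
    # Treat a record nobody has seen for over a year as suspect,
    # even if its status still says 'active'.
    return (today - server["last_seen"]).days > max_age_days

clean = [s for s in servers
         if s["status"] != "decommissioned" and not is_stale(s)]
print([s["name"] for s in clean])  # only srv-101 survives
```

The rules themselves are trivial; the cost lies in agreeing what counts as stale and in chasing down the records the rules flag.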

Naming conventions. I once worked on a project where the business analysts imported their existing Visio diagrams into our tool, and they immediately hit a problem with analysis and reuse: naming conventions. In various process models, the same procurement system was called "Ariba", "Ariba eTrax", "eTrax procurement system", and "the procurement system". In the absence of any guidelines, everyone had made their best guess. So it's necessary to spend time on naming conventions, so that modelers can find shared objects and reuse them.
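
One common mechanic for applying a convention retrospectively is an alias table that maps the observed variants onto a single canonical name. A small sketch, reusing the (hypothetical) names above:

```python
# Alias table: every variant seen in the imported diagrams, mapped to
# the one canonical name agreed in the naming convention.
ALIASES = {
    "ariba": "Ariba eTrax",
    "ariba etrax": "Ariba eTrax",
    "etrax procurement system": "Ariba eTrax",
    "the procurement system": "Ariba eTrax",
}

def canonical_name(name):
    # Unknown names pass through untouched, to be reviewed by a person.
    return ALIASES.get(name.strip().lower(), name.strip())

for variant in ["Ariba", "eTrax procurement system", "SAP"]:
    print(canonical_name(variant))
```

The table is easy to apply but expensive to build: someone has to review every variant and decide what it actually refers to, which is exactly the cost being estimated here.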

Resolving overlaps. As noted above, existing information will usually be available, which is helpful. The catch comes when two data sources disagree. They might use different naming conventions, the data in one might be less fresh than in the other, and so on. Data consolidation is a topic well addressed elsewhere, and too large a subject to cover here – but it is an activity that may need to be costed.
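
As one illustration of the kind of rule involved, the sketch below merges two hypothetical extracts by preferring whichever record was updated more recently:

```python
from datetime import date

# Two hypothetical extracts describing overlapping servers, each
# record stamped with when that source last updated it.
cmdb = {"srv-101": {"os": "RHEL 8", "updated": date(2024, 1, 10)}}
scan = {"srv-101": {"os": "RHEL 9", "updated": date(2024, 5, 2)},
        "srv-204": {"os": "Windows", "updated": date(2024, 5, 2)}}

def merge_prefer_fresh(*sources):
    # Where sources overlap, keep the record with the newest timestamp;
    # records unique to one source are kept as-is.
    merged = {}
    for source in sources:
        for key, record in source.items():
            current = merged.get(key)
            if current is None or record["updated"] > current["updated"]:
                merged[key] = record
    return merged

merged = merge_prefer_fresh(cmdb, scan)
print(merged["srv-101"]["os"])  # the fresher record wins
```

Freshness is only one possible tie-breaker; in practice the precedence rules between sources have to be negotiated with their owners, which is where the real cost sits.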

Communication and rollout

Obtaining buy-in. It's a simple fact that many architecture efforts fail. Either they never gain traction, or they are subtly undermined when a department refuses to participate. So, even when the architectural effort has the backing of senior management, it's necessary to engage with all the stakeholders and address their concerns.

Establishing Processes. With multiple people working on a shared model, there needs to be a standard way for them to co-ordinate. So implementing a shared architecture model naturally means creating and rolling out processes for this co-ordination; all of which requires engagement with the employees who will be using them.

Training. Creating a shared understanding of an architecture necessarily means adopting certain standards, the biggest of which is a modeling standard. There are various metamodels available, and some organizations create their own (or their own extensions to existing ones). But if modelers are to use the metamodel effectively, they will need instruction in how to do so.

A second area of training is on the various standards and processes that need to be established in the organization.


Tooling

Tools. Now, it’s theoretically possible to create and maintain an architecture model without using any tool. You can create the diagrams in Visio, maintain the underlying information in Excel, and so on. And if your model is extremely basic, this may even be practical. But once you reach a certain critical mass of information, the costs of maintaining the model (represented in our structure here as the governance overhead) become prohibitive. So, a tool is often adopted at some point.

Tool consultancy. Most architecture tool vendors will offer some level of consulting around the adoption of their tool, beyond the basic tool training. Often this is targeted at addressing some of the other cost areas listed here.

Governance overhead

Architecture review. Having a shared model implies a level of governance around when projects are allowed to go live. This is often a small item, in that many organizations already have some level of architecture review – even if it is simply a directive to ‘go talk to the domain experts for areas X, Y and Z and make sure there’s not a problem’. So while governance is often seen as an imposed cost, any real cost is usually a small incremental one.

Updating the model. If the model is not to become shelfware, there needs to be a process for projects to update it when they go live. Executing this process and updating the current state will naturally involve some extra effort.

So at this point, we’ve considered key components of our cost areas – and in the next post, we’ll be looking at how to estimate them.