Creating a Tool Implementation Project Plan: Defining Scope Part 2

2015-08-26

I vividly remember the time I tried to introduce road mapping to a startup that I worked at in Canada. We asked the customer-facing staff to rate the items that we wanted to introduce on a scale of high, medium and low, so that we could focus on the items that customers were asking for. My older self is still rolling his eyes at the naivety of my younger self – yep, you guessed it, everything came back as top priority.

Summary: It’s one thing to identify what you would like to achieve when rolling out a tool; it’s another to fit all of the nice-to-haves into the bucket of time and resources available. In this post I examine the factors that drive the effort involved for each of the items listed in part one – an understanding that is needed before they can be prioritized.

Identify Organizational Structure

The first task, identifying the organizational structure that will be using the tool, is, thankfully, a straightforward one – it should simply be a case of identifying the existing teams and what level of access they will need to models.
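To make that concrete, the output of this task can be as simple as a table of teams and the access each needs to the models. The sketch below is purely illustrative – the team names and the read/contribute/administer scheme are assumptions, not a recommendation for any particular tool.

```python
# A minimal sketch of the output of this task: which teams exist, and what
# access level each needs to the models. Team names and access levels are
# hypothetical placeholders, not a recommendation for any specific tool.
TEAM_MODEL_ACCESS = {
    "Enterprise Architecture": "administer",    # owns the repository structure
    "Business Process Analysts": "contribute",  # create and edit process models
    "IT Operations": "read",                    # consume models, no editing
}

for team, access in TEAM_MODEL_ACCESS.items():
    print(f"{team}: {access}")
```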

Identify Governance Model

Experience has taught me that identifying and agreeing the governance model for models can be easy, taking only a single one-hour workshop… or it can take eight weeks of discussions. Whether this activity is a trivial exercise or a major roadblock rests on two factors:

  • What is the current level of co-ordination between groups? At one extreme, different groups within the organization may already co-ordinate extensively, using manual processes. At the other extreme, different groups might exist in different time zones, communicate mainly through email, and operate largely independently with widely different processes.
  • Is there a single leader who is in a position to mandate a solution? Input from stakeholders is necessary, but without someone who is empowered to keep discussions on track (and impose a decision if necessary), there is a real risk of circular discussions.

Agreeing the governance model is one of the three main risk items in a tool implementation.

Identify Reference Models

Identification of the reference models that the organization wishes to use will generally be straightforward; generic frameworks such as TOGAF and ITIL are well-known, while the organization will often be a member of industry groups that will assist in identifying industry-specific frameworks. Therefore, while this is a necessary step, it is not a significant one.

Define Repository Structure

How much effort is involved in defining the repository structure will naturally depend on the tool, but it is unlikely to take more than a day’s effort.
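For illustration only, the output might look something like the sketch below; the folder names are assumptions rather than a recommended layout, and the real structure will follow the conventions of the chosen tool.

```python
# Hypothetical repository structure, sketched as a nested dict.
# Folder names are illustrative assumptions; the real layout depends on the tool.
REPOSITORY_STRUCTURE = {
    "Reference Models": ["TOGAF", "ITIL"],
    "Business Architecture": ["Capabilities", "Processes"],
    "Application Architecture": ["Applications", "Interfaces"],
    "Technology Architecture": ["Servers", "Networks"],
}

def print_structure(structure: dict) -> None:
    """Print the folder hierarchy, one level of subfolders deep."""
    for folder, subfolders in structure.items():
        print(folder)
        for sub in subfolders:
            print(f"  {sub}")

print_structure(REPOSITORY_STRUCTURE)
```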

Implement Reference Models

In contrast to simply identifying the reference models to use, implementing them may take some time, depending on whether the vendor has an off-the-shelf solution for that standard, and if not, the scale and complexity of the standard.

Usually, implementing a standard such as BIAN or SCOR will take several days at least. Factors that affect this are the number of entities, the number of relations between them, and the number of data items attached to them. An additional exercise will likely be necessary to map the standards to each other. For example, BPMN has a reasonable mapping to ArchiMate; mapping SCOR will take rather more time.
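To give a flavour of what the mapping exercise produces, here is a partial, illustrative BPMN-to-ArchiMate element mapping of the kind that needs to be agreed and written down. The correspondences shown are commonly used ones, but any real mapping should be validated against the standards you actually adopt.

```python
# Illustrative (partial) mapping of BPMN elements to ArchiMate elements.
# These correspondences are commonly used, but a real mapping should be
# agreed with your modelers and validated against the adopted standards.
BPMN_TO_ARCHIMATE = {
    "Task": "Business Process",
    "Sub-Process": "Business Process",
    "Lane": "Business Role",
    "Pool": "Business Actor",
    "Data Object": "Business Object",
    "Message Event": "Business Event",
}

def map_element(bpmn_type: str) -> str:
    """Return the ArchiMate element for a BPMN type, or flag it for review."""
    return BPMN_TO_ARCHIMATE.get(bpmn_type, "UNMAPPED - needs a decision")

print(map_element("Lane"))     # Business Role
print(map_element("Gateway"))  # UNMAPPED - needs a decision
```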

Define Modeling and Branding Standards

This is another relatively small item, albeit a necessary one. The modeling standards that exist are small in number – BPMN for process, TOGAF or ArchiMate for architecture. The complexity comes in identifying how these need to be tailored; for example, how do you track risks and controls against BPMN processes? How do you map capabilities if using ArchiMate?

Clean Up Legacy Data

This is another highly variable work item, as it will depend on the number of existing models, their complexity, and their precision. Often the largest effort involved comes in mapping existing objects to the metamodel that you have adopted. Does this rectangle represent a server or an application? Or an application function? Perhaps a subject matter expert will be available to advise you... now you just have to locate them and get them to spend time with you. Other times, you have to spend the time investigating the question yourself.

The clean-up and import of existing legacy data is a highly variable item, and the second of three risk factors for the implementation.
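One way to triage this work is to classify automatically whatever the object names make obvious and route the rest to a subject matter expert. The sketch below assumes a naive keyword lookup; the keywords and metamodel types are invented for illustration.

```python
# Hypothetical triage of legacy objects: a naive keyword lookup assigns a
# metamodel type where the name makes it obvious, and everything else goes
# on a list for a subject matter expert to review.
KEYWORD_TO_TYPE = {
    "server": "Node",
    "db": "System Software",
    "app": "Application Component",
}

def classify(legacy_name: str) -> str:
    """Return a metamodel type based on keywords, or flag the object for review."""
    name = legacy_name.lower()
    for keyword, metamodel_type in KEYWORD_TO_TYPE.items():
        if keyword in name:
            return metamodel_type
    return "NEEDS SME REVIEW"

legacy_objects = ["Payroll App", "HR DB", "Mail Server", "Box 14B"]
for obj in legacy_objects:
    print(f"{obj}: {classify(obj)}")
```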

Data Definition

Defining which attributes, and what data, the tool should track is an area that can take up more time than it should. On the one hand, there is a temptation to record everything. On the other hand, modelers work under time constraints, and if they are confronted with 30 fields to fill in for each item, they likely won’t do it… or if they do, they’ll fill in only some. Which ones? Up to them.

Some advocate the use of mandatory fields to address this, but in my experience people find workarounds. A classic example is the process definition project where the tool required each process to have a process owner before the process could be saved. So, the modelers simply listed the CIO as the owner of every single process with the intention, never fulfilled, of going back and putting the real owner in later.

The rule of thumb that I advise when defining which fields to record is to ask: why do you care about this field? If that information won’t realistically drive a decision or be used to filter items, don’t bother recording it. It can always be added and recorded later…
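One way to keep yourself honest here is to record, next to each candidate attribute, the decision or filter it supports – and drop anything that has no answer. The attribute list in the sketch below is invented purely for illustration.

```python
# Hypothetical attribute definition: each field must name the decision or
# filter it supports. Anything with an empty justification gets dropped.
CANDIDATE_ATTRIBUTES = [
    {"name": "owner",          "drives": "who to contact for change approval"},
    {"name": "lifecycle",      "drives": "filter roadmap views by invest/retire"},
    {"name": "annual_cost",    "drives": "spend reports broken out by department"},
    {"name": "favourite_font", "drives": ""},  # nobody could justify this one
]

attributes_to_keep = [a["name"] for a in CANDIDATE_ATTRIBUTES if a["drives"]]
print(attributes_to_keep)  # ['owner', 'lifecycle', 'annual_cost']
```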

Create Data Feeds

Creation of the data feeds in from, and out to, other repositories is the third of the three main risk items. This should not be surprising; implementing a data feed is effectively an extract/transform/load (ETL) project in itself. There are two main factors to consider.

First, and most significant, is identifying what information another repository can make available, and in what format. This will often require the time of a subject matter expert for that repository, who tends to be overbooked and hard to get hold of.

Second, tools generally have formats they can export to, and formats they can import from… but it is rare that two tools use exactly the same format. So some kind of transformation is necessary, and this may entail using a resource who has specific, specialized skills. Consequently, the availability of that resource becomes a consideration.
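To show the shape of the work rather than any particular tool’s API, here is a minimal extract/transform/load sketch: read a CSV export from a hypothetical CMDB, rename the fields, and write a CSV in the (equally hypothetical) format the modeling tool imports. All file names and column names are assumptions.

```python
import csv

# Minimal ETL sketch for a data feed between two repositories.
# File names and column names are hypothetical; real feeds depend entirely
# on what each tool can export and import.
FIELD_MAPPING = {
    "ci_name": "Application Name",   # CMDB column -> modeling tool column
    "ci_owner": "Owner",
    "ci_status": "Lifecycle Status",
}

def transform(row: dict) -> dict:
    """Rename CMDB columns to the columns the modeling tool expects."""
    return {target: row.get(source, "") for source, target in FIELD_MAPPING.items()}

with open("cmdb_export.csv", newline="") as src, \
     open("modeling_tool_import.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(FIELD_MAPPING.values()))
    writer.writeheader()
    for row in reader:
        writer.writerow(transform(row))
```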

Define Reports and Dashboards

The reports and dashboards are an important component of any tool, as they present the public face of the data within the repository. To a large extent, this area is the key reason for implementing a modeling tool in the first place.

I’ve often found that report and dashboard definition can descend into a quagmire of endless discussions. The best approach in such cases is to demand, for each dashboard or report, a simple question that expresses its purpose. In other words, what question does this report or dashboard answer? Some examples –

  • “How much are we spending on hardware, broken out by department?”
  • “What are the most used application interfaces that we have?”
  • “What are the processes with the largest number of risks against them?”

The amount of time for each report or dashboard will vary based on the tool in question. A rule of thumb is to estimate half a day for the design and implementation of a dashboard that answers a single question.
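As an illustration of the one-question-per-report rule, the sketch below answers the third example question above from a hypothetical export of process-to-risk relations. The data structure is invented; a real implementation would use the tool’s own reporting facilities.

```python
from collections import Counter

# Hypothetical export of process-to-risk relations from the repository.
process_risk_links = [
    ("Order to Cash", "Fraud"),
    ("Order to Cash", "Data entry error"),
    ("Procure to Pay", "Duplicate payment"),
    ("Order to Cash", "Regulatory breach"),
]

# "What are the processes with the largest number of risks against them?"
risk_counts = Counter(process for process, _ in process_risk_links)
for process, count in risk_counts.most_common():
    print(f"{process}: {count} risk(s)")
```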

Tailor Training

Tailoring the training will involve two key aspects:

  • Creating training data that is relevant to the roles of the training groups;
  • Selecting the tasks that the trainees will be engaging in.

As a rule of thumb, I budget half a day to tailor tool user training for a given group.

Deliver Training

Delivery of the training will vary based on attendees and scope, and here you need to rely on the tool vendor to advise you. When training on iServer, I prefer to target a class size of around five people per session, with user training taking half a day and system administrator training taking a full day.

Execute Communication Plan

Executing the communication plan will vary based on the number and scope of the audiences. A good approach is to schedule one session per team – a senior stakeholder and their key team members – with each session taking a maximum of one hour.

In summary then, the key items to be wary of are: agreeing the governance model, importing legacy data, and defining data feeds. In an ideal world it would be possible to offer a formula to estimate each of the work items; unfortunately, that precision could only come at the expense of making the structure too specific to apply generally.