
One metric you should consider for showing how quality pays for itself

Process improvement initiatives everywhere, and not a single measurement to explain them.

How many slide decks have you seen where the cost savings shown seem more hypothetical than real? I got into the habit of creating a folder in my mail inbox labeled ‘PIFMA’, which stands for ‘pulled it from my a$$’.

My rule of thumb: if you can’t carry a number forward into a working model using a standard measurement checklist (the result of answering a series of measurement review questions), then you’re either guessing, or you carry a lot of implicit knowledge and believe you are an SME, aka the informed guesser, or perhaps you carry a lot of weight and no one challenges your numbers out of fear.

I prefer to start with good old-fashioned heuristics and clear, concise KPIs that have been baselined against them. Does it take more work than whipping up a bunch of fluff numbers? Yes! Is it more reliable than fluff numbers, and capable of building an economically fundable model that can withstand the test of time, scope and quality? Yes!

Here’s one metric I have used when creating executive slide decks for CIOs, CTOs and COOs to secure funding, and then put into practical use as something a team can be measured by.

Defect removal costs per function point

Let’s say I wanted to make sure a team was focused on the defect removal cost per function point and not the cost per defect metric. My conversation’s central theme is that I want to prove that with quality built into a product early I can substantially lower the cost of removing defects. How would I talk a team through this?

Assume all defect removal operations have a significant quantity of fixed costs associated with them. It follows that, as the number of defects reported declines, the cost per defect must rise. It is reasonable to argue that cost per defect is not valid for serious economic analysis, because every downstream activity will have a higher cost per defect than upstream activities. Therefore, it’s important that the discussion around measurement stays focused on the right metric.

Example:

I have a software application that contains 100 function points. As part of my quality assurance process, the software will go through three consecutive test stages, each of which will test 50 percent of the function points. Writing the test cases for each test stage costs $1,000. Running the tests for each stage costs $1,000. Fixing each discovered defect costs $100. What are the economics of the three test stages?

In the first test stage, the costs were $1,000 for writing test cases, $1,000 for running test cases, and $5,000 for fixing 50 defects, or $7,000 in all. The “defect removal cost per function point” for test stage 1 would be $70 ($7,000 spread across the application’s 100 function points). By contrast, dividing the same $7,000 by the 50 defects found gives a cost per defect of $140.

In the second test stage, the costs were $1,000 for writing test cases, $1,000 for running test cases, and $2,500 for fixing 25 defects, or $4,500 in all. The “defect removal cost per function point” for test stage 2 would be $45. With only 25 defects found, the cost per defect rises to $180.

In the third test stage, the costs were $1,000 for writing test cases, $1,000 for running test cases, and $1,200 for fixing 12 defects, or $3,200 in all. The “defect removal cost per function point” for test stage 3 would be $32. With only 12 defects found, the cost per defect jumps to roughly $267.
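To make the arithmetic explicit, here is a minimal sketch in Python that reproduces the figures above. The defect counts per stage, the $1,000 fixed costs, and the $100-per-fix figure come straight from the example; the variable names and print formatting are just illustrative.

# Sketch of the worked example: a 100-function-point application,
# three consecutive test stages, fixed costs for writing and running
# test cases, and $100 to fix each defect found.

TOTAL_FUNCTION_POINTS = 100
WRITE_TESTS_COST = 1000      # fixed cost per stage
RUN_TESTS_COST = 1000        # fixed cost per stage
FIX_COST_PER_DEFECT = 100

defects_found_per_stage = [50, 25, 12]

for stage, defects in enumerate(defects_found_per_stage, start=1):
    total = WRITE_TESTS_COST + RUN_TESTS_COST + defects * FIX_COST_PER_DEFECT
    cost_per_fp = total / TOTAL_FUNCTION_POINTS   # declines: 70, 45, 32
    cost_per_defect = total / defects             # rises: 140, 180, ~267
    print(f"Stage {stage}: total ${total:,}, "
          f"${cost_per_fp:.0f} per function point, "
          f"${cost_per_defect:.0f} per defect")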

As can be seen from this example, the fixed costs of writing and running test cases drive the cost per defect up at each later stage, even though fewer defects are found. The defect removal cost per function point, however, decreases.

That is a metric you should consider for measurement. Granted, it’s only one – there are and can be many more – but this is where you can start the conversation that drives home your central theme.

Note: One could argue that defect cost does not rise because resource costs are fixed; therefore, defect removal costs remain the same no matter which environment or stage the defect was found in. But considering how many more resources are involved in removing a defect as it migrates through your ecosystem, and not forgetting the potentially irreparable damage caused by defects found by your customers, it’s not as easy to dismiss the metric I am proposing. Granted, you should always be rigorous in your metric assumptions, consider other models and calculations, and provide real examples that apply to your projections. The design of your experiment must be suited to your situation.

Review, Approve, Verify – Part 2 – Right Approach, Applied Correctly

It’s been going on since the first tool was made. Since the first process was created, companies have been trying on catchy names in the name of improving what is into what could be better. Today, corporations attempt qualitative and quantitative change to make themselves leaner or shorten their sales cycle as a sure way to add to profits. But how do you know if you are succeeding in changing organizational culture or behavior, lowering the cost of doing business, eliminating duplication of common services, or even creating new markets from current intellectual property, patents or products, without the right people, processes and tools? There have been numerous books written on re-engineering the corporation, transformational leadership and change. Enter the words ‘transforming an organization’ into an advanced Google search and thousands of titles will pop up. Imagine, for a moment, that each book contains a unique aspect of the approach and when it’s appropriate to apply it. That would imply that we have thousands of ‘right approaches’ and we need to know which one is the right one to apply in each situation when transforming an organization.

Transformation is a way to qualitatively and quantitatively change what isn’t working, or what could be working better, for a company, and companies are clearly looking for guidance from professionals who can help them. Rarely do publishers publish what does not sell.

So, how does any of this relate to review, approve, and verify? Those three words put into practice have the ability to transform your organization.

There may be some reading this who believe there is a ‘one size fits all’ approach to delivering quality processes: solving problems is as simple as creating one simple flow chart, prescribing it for a plethora of challenges, and the rest will solve itself. In the long run, leaders at companies looking to transform need to ask themselves whether they would prefer a doctor who prescribes the same medicine dispensed to all patients, without fully understanding or offering professional guidance regarding not only what ails them, but what caused what ails them and how to get on a road to recovery.

There are four quadrants you should be conscious of when choosing a quality process and deciding how to go about implementing it, shown here:

Right Approach, Applied Incorrectly (the message is on key: people ‘get’ what the process is; it’s the delivery mechanism/forum that is off-base). Your processes and leadership (a person or team) are in place; however, a groupthink mindset exists due to a lack of sustained momentum and low quantitative and qualitative data. This is an ‘I understand it, but we do things differently here’ or ‘I just do what I am told’ mentality.

Right Approach, Applied Correctly (the right message, delivered the right way). Processes and leadership with the ability to sustain momentum have encouraged and mentored a framework that nurtures and sustains the qualitative and quantitative data that is vital to building a continuously aware, transformational organization.

Wrong Approach, Applied Incorrectly (the message is incorrect, or makes the process appear more opaque than it already was, and the delivery mechanism/forum are both inappropriate). It’s an “every man for himself” attitude coming from most team members nearly all the time. The data that is available is meaningless because the processes that allow for its collection are flawed; transformation is unthinkable. This is the “Why should I care about this?” or “How is this connected to my work?” mentality.

Wrong Approach, Applied Correctly (the message is incorrect, addresses partial goals, and has adverse effects even though the delivery mechanism/forum was appropriate). There is no sustained leadership. Sadly, the team’s awareness of the ever-changing approach and application, with each new manager’s spin on following ritual, will merely reaffirm their attitude of slogging through the project to stay in sync with, or out of trouble with, the middle manager’s goals while losing sight of overall quality. This sustains mediocrity.

In the next post I’ll talk about metrics that matter.