
Top Tips for High Value Testing Management Information

by Alan Campbell, on Apr 16, 2020 1:13:48 PM

 

One of the most important tasks in a Test Manager’s role is to ensure that senior stakeholders receive the most relevant Management Information (MI) on a regular basis throughout the course of the test execution phases on a project or programme. By ‘most relevant’, we mean the information which informs the stakeholders so they can accurately and confidently plot the next moves on the programme. That could be:
 
  • Maintaining a steady course as progress is tracking in-line with the baselined plan, or
  • Making an intervention to get the programme back on track

Our take on the most relevant MI, based on our experience of working on many projects and programmes, is outlined below. However, before we look at that, we must first consider when you should be aiming to agree the type of MI that you will present to stakeholders.

It is commonplace for Test Managers not to consider the MI that they will present until testing commences, or at best until they are part way through test preparation. In our view this is far too late.

We believe that you should aim to agree the MI before the first test script is written. This provides two key benefits:

  1. It allows you to get testing on the agenda for discussion with the senior stakeholders in the early stages of the programme. This is always good practice.
  2. More importantly, the MI that you produce will dictate the structure of your test script repository, as all of the information required to produce the MI must be captured in the test scripts.

For example, if you want to break test progress down into Business Workstreams, then you should ensure that this is included when initially creating your test script repository, and all test scripts should be assigned to a Business Workstream.

In the same vein, if you want to show test execution progress against specific modules of code or functionality within a system, then this should also be included in the design of your test script repository and in your test scripts.

TOP TIP: Agreeing the MI upfront will ensure that you can generate the information required at the touch of a button, without having to rework either your test script repository or your test scripts.
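As an illustration of that ‘touch of a button’ idea, here is a minimal Python sketch of rolling execution status up by Business Workstream. The record layout and field names (`workstream`, `status`) are hypothetical; the point is that because each script was tagged when the repository was created, the MI falls out of a simple grouping:

```python
from collections import Counter, defaultdict

# Hypothetical test-script records; the field names are illustrative.
# Each script was assigned a Business Workstream at creation time.
test_scripts = [
    {"id": "TS-001", "workstream": "Finance", "status": "Passed"},
    {"id": "TS-002", "workstream": "Finance", "status": "Failed"},
    {"id": "TS-003", "workstream": "HR", "status": "Not Run"},
    {"id": "TS-004", "workstream": "HR", "status": "Passed"},
]

def progress_by_workstream(scripts):
    """Roll up execution status per Business Workstream."""
    summary = defaultdict(Counter)
    for script in scripts:
        summary[script["workstream"]][script["status"]] += 1
    return {ws: dict(counts) for ws, counts in summary.items()}

print(progress_by_workstream(test_scripts))
```

The same approach extends to any other dimension agreed upfront, such as module or functionality area, simply by tagging scripts with an extra field.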

Types of Test MI

Below we’ve detailed different types of Test MI covering both Test Execution and Defects. From our own experience these all provide a slightly different slant on progress and are important when presenting information back to a stakeholder community.

In all cases the MI would show the current day/week, depending on how frequently the MI is issued, and a cumulative graph comparing a roll-up of the actual numbers versus the current baselined forecast.

TOP TIP: Be careful how much information you provide. Too much information will confuse rather than inform – a phenomenon known as ‘paralysis by analysis’. Decision making then becomes slower, or worse, stops altogether due to the sheer volume of information. If you search for long enough you will undoubtedly uncover information which is contradictory or ambiguous, and you want to avoid this at all costs.

1. Test Execution MI

  • Test Execution Throughput: – the most basic of statistics, this simply shows whether you are tracking to plan or not.
  • Pass vs Fail: – pass vs fail introduces the aspect of quality. Ideally you should have already agreed an indicative pass rate with your stakeholders prior to commencing test execution. This pass rate will depend on a number of factors, for example:
    • Are you testing new or existing code?
    • Has the system been configured for you? If so, has the configuration been used/tested before, or is it new to your project/programme? If it is a new configuration then you should treat it with the same respect/suspicion as you would new code.
    • Is the system COTS (commercial off-the-shelf software with no change)?
    • Has the code/system been tested in earlier project or programme phases?

All of these factors and more will help to build a picture of how stable and robust the system is, which will then feed into the expected Pass Rate. You should then report your actual Pass Rate against the forecast Pass Rate.

Note – it’s very common for the Pass Rate to dip a bit at the beginning of the Test Phase then slowly rise as the system functionality becomes more stable and reliable.
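The actual-versus-forecast Pass Rate comparison can be sketched in a few lines of Python. The weekly figures below are invented purely to illustrate the typical early dip followed by a recovery:

```python
def pass_rate(executed, passed):
    """Actual pass rate as a percentage of tests executed so far."""
    return round(100 * passed / executed, 1) if executed else 0.0

# Invented cumulative weekly figures: (executed, passed). Note the early
# dip below the agreed forecast, then the climb as the system stabilises.
weeks = [(40, 26), (95, 68), (160, 126), (240, 199)]
forecast_rate = 80.0  # indicative pass rate agreed with stakeholders

for week, (executed, passed) in enumerate(weeks, start=1):
    print(f"Week {week}: actual {pass_rate(executed, passed)}% "
          f"vs forecast {forecast_rate}%")
```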

  • Descoped: – this records the tests which you had previously written but no longer intend to execute. An important input here is to understand why these tests will not be executed. For example, has the design changed, or has functionality been delayed to a later phase? If so, what will this mean for future test execution?
  • Business Accepted: – these are tests which did not pass and for which a fix was not requested – instead the business team have accepted the deviation from requirements and will absorb this into the working system.

TOP TIP: It is important to keep a count of the instances of Business Acceptance as, for each of these tests, a downstream impact may occur in the Live Operations team – be that amending already-written training documentation or increasing the number of Live Operations staff to cater for a business workaround. All of these examples would potentially present an ongoing cost to the business and thus should be tracked and reported on.

  • Blocked: – When is a test script blocked versus when should the test be set to 'failed'? We have worked on many programmes where this question has been posed. The basic premise we have used is that if you cannot execute the Test Script due to an existing defect then you set the test to Blocked. There are however many people who would argue that the Test Script should simply be set to 'failed' and logged against the existing defect. There is no right or wrong in this instance, just personal preference.

TOP TIP: Another use of Blocked is when you cannot execute the Test Script because a piece of functionality has not yet been delivered – this works better here, as you would be less likely to fail a test whose functionality is due to be delivered in a forthcoming Change Request, for example.

2. Defects MI

  • Defects Raised and Closed per week: – while a very common statistic, this provides a good indication of both the quality of the software under test and also whether the defect investigation and resolution teams are sized correctly to deal with the flow of issues.
  • Open Defects: – this is an important statistic to report on and compare week on week. Inevitably when you first commence testing the number of Open Defects will rise (often sharply) as the Test Teams are let loose on the software. The number of Open Defects may continue to rise as Test Scripts are executed for the first time and the whole system is exercised under test. A sustained decrease indicates that you have ‘broken the back of the defects’ in the system. At this point you may consider reallocating some of the defect investigating and fixing resources to bolster the Implementation teams.

TOP TIP: The key trend to look out for is when the number of Open Defects starts to slow, and then when this number decreases.
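Spotting those two turning points can be automated. The following Python sketch, with made-up weekly Open Defect counts, reports the first week where growth slows and the first week where the count actually falls:

```python
def trend_turning_points(open_defects):
    """Return (week growth first slows, week of first decrease).

    Weeks are 1-indexed; either value may be None if not yet reached.
    """
    deltas = [b - a for a, b in zip(open_defects, open_defects[1:])]
    # Growth "slows" when a week's rise is smaller than the previous week's.
    slow_week = next(
        (i + 2 for i in range(1, len(deltas)) if 0 <= deltas[i] < deltas[i - 1]),
        None,
    )
    # A decrease is any week where the count falls below the previous week.
    decrease_week = next((i + 2 for i, d in enumerate(deltas) if d < 0), None)
    return slow_week, decrease_week

# Invented weekly Open Defect counts: sharp rise, plateau, then fall.
counts = [12, 30, 55, 70, 78, 80, 74, 61]
print(trend_turning_points(counts))  # (4, 7)
```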

  • Defect Age: – the defect age typically reports on the length of time that Defects have been ‘open’ for. While another common statistic, this one comes with a ‘health warning’. In our experience there are many valid reasons for defects remaining open for a relatively long period of time after the resolution is first identified. For this reason, we typically only report Defect Age for the highest priority defects which by their very nature should be fixed in the shortest possible time.

TOP TIP: Compiling this data for lower priority defects will create misleading Test MI.
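A simple way to keep the Defect Age MI honest is to compute ages only for the highest-priority defects. This Python sketch assumes a hypothetical record layout with `priority` and `raised` fields:

```python
from datetime import date

# Hypothetical open-defect records; field names are illustrative.
defects = [
    {"id": "D-101", "priority": 1, "raised": date(2020, 3, 2)},
    {"id": "D-102", "priority": 3, "raised": date(2020, 2, 10)},
    {"id": "D-103", "priority": 1, "raised": date(2020, 3, 30)},
]

def p1_defect_ages(defects, as_of):
    """Age in days of Priority 1 defects only; lower priorities are
    deliberately excluded to avoid misleading Defect Age MI."""
    return {
        d["id"]: (as_of - d["raised"]).days
        for d in defects
        if d["priority"] == 1
    }

print(p1_defect_ages(defects, as_of=date(2020, 4, 16)))
```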

  • Defect SLAs (Service Level Agreements): – defect-fixing SLAs, while common, are often very difficult to police and invariably cause issues when agreeing contracts with Software Development Teams or Providers. Some defects will naturally be more complex, and thus take longer to fix, than others.

TOP TIP: A more appropriate SLA to report on is the time taken to commence investigating a defect. This metric will either highlight a suitably resourced and efficient defect investigation team, or shine a light on an area which is under-resourced and/or inefficient.
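That time-to-commence-investigation SLA is straightforward to measure. A minimal Python sketch, using invented timestamps for a single defect:

```python
from datetime import datetime

def hours_to_start_investigation(raised, investigation_started):
    """Hours between a defect being raised and investigation commencing."""
    return (investigation_started - raised).total_seconds() / 3600

# Invented timestamps for a single defect.
raised = datetime(2020, 4, 13, 9, 0)
started = datetime(2020, 4, 13, 15, 30)
print(hours_to_start_investigation(raised, started))  # 6.5
```

Reported across all defects, this single figure quickly shows whether the investigation team is keeping pace with the flow of new issues.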

Should You Revise Your Forecast to Reflect Actual Progress?


For all of the above, once testing is underway you must decide whether to amend the remaining plan to align with the actual progress to date. This is a key intervention and should be undertaken even if it means radically changing your initial plan. In many cases history provides an accurate insight into future progress, and you stand the best chance of forecasting an accurate plan and end date if you reflect the progress to date in a revised forecast.

The timing of this intervention is important and will in part depend on the length of the Test Cycle or test phase that you are managing:

  • On typical Waterfall programmes with longer test cycles, we would tend not to compare forecast and actuals until at least four weeks of actual data were available, and ideally six. This is usually long enough to establish a sustainable pattern or rhythm which can be maintained as you progress through the timeline.
  • On Agile Programmes where test cycles/sprints are shorter you may be forced to address this much sooner within the cycle/sprint, or you may simply opt to reflect the run rate from the previous cycle/sprint in the next iteration.
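Reflecting the run rate to date in a revised forecast can be as simple as the Python sketch below; the figures are hypothetical:

```python
import math

def revised_weeks_remaining(total_scripts, executed_so_far, weeks_elapsed):
    """Forecast the remaining duration from the actual run rate to date."""
    run_rate = executed_so_far / weeks_elapsed  # scripts executed per week
    remaining = total_scripts - executed_so_far
    return math.ceil(remaining / run_rate)

# Invented figures: 240 of 600 scripts executed after six weeks
# implies a run rate of 40 per week, so nine more weeks are forecast.
print(revised_weeks_remaining(total_scripts=600, executed_so_far=240,
                              weeks_elapsed=6))  # 9
```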

Conclusions

Regardless of the project delivery methodology or framework being adopted, testing will almost always happen directly before an implementation.

As a result, the desire to understand when this phase is likely to complete, and the requirement for reliable and accurate MI to back up that claim, come into sharper focus than in any other phase of a project. Completing testing provides a ‘certainty of delivery’ that is not available at any other time in the project, and means that costly and cherished resources can be freed up to work on other assignments.

To meet the challenges associated with delivering high value Testing Management Information, it is very important to agree in advance with your Senior Stakeholders the MI that they need. It is key that the MI they receive is timely, reliable, insightful and accurate because we, as Test Managers, stand or fall on the quality of the output we provide.

At Agenor Technology, our experience clearly shows that by adopting some or all of the test execution and defect types of MI outlined above, we significantly improve both the quality and value of the Testing Management Information we deliver for our clients.

If you are facing challenges with your Testing MI and would like to benefit from the skills and expertise of Agenor Technology, please get in touch today.

 

 

