Purpose

Once the playback process has completed, either partially or fully, you can begin reviewing the results.

Time Required

Dependent upon the number of failures that occurred as part of the playback.

Audience / Users

Business Users, SMEs, Business Analysts, Test Team

Pre-requisites

It is recommended that you wait until the playback has completed fully; however, you can review the results while the playback is still ongoing.

Some background

When Testimony records the live production system(s), it captures “inputs”, “outputs” and certain “linkages” that occur in between. When the playback is repeated on the test system(s), Testimony automatically checks that the actual output matches the expected output that occurred in the live system. If any linkages were recorded, it compares these as well.
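Conceptually, this check can be thought of as a field-by-field comparison of each recorded output against the replayed output. The sketch below is purely illustrative of the idea; the function and field names are assumptions, not Testimony's actual internals:

```python
def compare_outputs(expected, actual):
    """Compare recorded (expected) output fields against replayed (actual) ones.

    Returns a list of (field, expected_value, actual_value) mismatches.
    Illustrative only -- Testimony performs this check internally.
    """
    mismatches = []
    for field, expected_value in expected.items():
        actual_value = actual.get(field)
        if actual_value != expected_value:
            mismatches.append((field, expected_value, actual_value))
    return mismatches

# Example: a replayed sales order produced a different document number
expected = {"order_number": "4500001234", "net_value": "1000.00"}
actual = {"order_number": "4500001299", "net_value": "1000.00"}
print(compare_outputs(expected, actual))
```

Any non-empty result here corresponds to a failure that would surface in the Failure Overview described below.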

Process Steps

The task at this stage is to review the results and decide whether there are any fundamental regression issues. If there are, defects should be raised against these for resolution. Once enough critical defects are resolved, the playback can be re-run against a restored version of the test system. The defect resolutions (typically transport requests) should be applied to this restored system before re-running the playback process.

To begin, navigate to the “Results” drawer in Testimony and select “Result Overview”. Double-click the execution queue that you want to review, then navigate to the “Failure Overview” tab.

The Failure Overview breaks down the results of the playback by the following:

(1) Application component – This is the functional area. The lowest level is displayed first, followed by the top-level application component. It is determined by the application component to which the failed object belongs. For example, VA01 belongs to Sales and Distribution.

(2) Criticality – The criticality of a failure depends upon the “Coverage Analysis” functionality in Testimony. To obtain criticalities, you must first have executed the automated criticality determination. This derives criticality from how often a given interaction is used, but it can then be overridden by a library, which allows less frequently used transactions to have their criticality increased.

(3) Reason for Failure – The reason for failure is determined by the type of interaction, so dialog transactions have one set of failure reasons and RFCs have a different set. More information about a failure reason is available within the “Failure Overview” transaction by selecting the “Information” icon.
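The criticality logic in point (2) above can be sketched roughly as follows. The usage thresholds and the shape of the override library are assumptions made for illustration; they are not Testimony's actual implementation:

```python
def determine_criticality(usage_count, overrides, transaction):
    """Derive criticality from usage frequency, then apply library overrides.

    The override library takes precedence, so rarely used but
    business-critical transactions can still be rated highly.
    Thresholds here are illustrative assumptions.
    """
    if transaction in overrides:
        return overrides[transaction]
    if usage_count >= 1000:
        return "Critical"
    if usage_count >= 100:
        return "High"
    if usage_count >= 10:
        return "Medium"
    return "Low"

# A low-volume payment run raised to Critical via the override library
overrides = {"F110": "Critical"}
print(determine_criticality(5, overrides, "F110"))  # -> Critical
print(determine_criticality(5, {}, "ZRARELY"))      # -> Low
```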

Start by focusing on the functional area relevant to you. Expand a particular area and review its failures and the reasons for failure. Begin with the Critical failures, then work your way to the right through High, Medium and Low. When you click on a failure count, a popup screen lists the failures in that area.

You should then review these failures by checking the reason for failure and comparing what was expected with what actually happened. This comparison is available by clicking the hot-spot icon in the “Act” (Actual results) column. A popup window will appear showing the expected results on the left and the actual results on the right. Depending upon the type of interaction (dialog transaction, RFC, batch job), you will see different expected versus actual results here.

It is currently not possible to generate defects automatically from the failures, nor to create them manually. You should create defects in your existing ITSM solution until defect management functionality becomes available in Testimony (scheduled for Q1 2018).
