In any Testimony playback, even into an identical system with no changes, some script failures are inevitable. These might be caused by environmental differences between the recording and playback systems; by limitations in the wider playback environment (e.g., some organisations choose not to install MS Office on the bot VMs); by slight differences in execution timings; or by certain aspects of the ways that SAP and Testimony work.

These failures during a playback constitute “noise” which can mask the genuine regression defects that Testimony is looking for.

With a double playback, we seek to strip out this noise, thereby making the job of identifying genuine regression defects much easier.
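The noise-stripping idea can be illustrated with a small sketch. This is a hypothetical illustration only (the data structures and function name are invented for this example, not Testimony’s actual implementation): a failure from the second playback is proposed as a defect only if the same failure, for the same reason, did not also occur in the first playback.

```python
# Hypothetical sketch of double-playback noise filtering.
# Failures seen in BOTH playbacks (same script, same reason) are treated
# as environmental noise; only failures unique to the second playback
# are proposed as potential regression defects.

def propose_defects(first_playback_failures, second_playback_failures):
    """Each failure is a (script_id, reason) tuple."""
    noise = set(first_playback_failures)
    return [f for f in second_playback_failures if f not in noise]

first = [("SCRIPT_001", "User logon not captured"),
         ("SCRIPT_002", "Different Message")]
second = [("SCRIPT_001", "User logon not captured"),  # noise: also failed in first playback
          ("SCRIPT_003", "Different Message")]        # unique to second: proposed as a defect

assert propose_defects(first, second) == [("SCRIPT_003", "Different Message")]
```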

Reasons for defects during first or second playbacks

Even where a recording system and a playback system are identical (i.e., no release / upgrade / patching changes have been deployed to the target system before starting the playback), you can always expect to see some defects (differences between the output from the recording and the output from the playback) because of fundamental ways in which SAP and Testimony work. This section highlights some common reasons for seeing playback defects.

In addition to these, the Testimony Testers’ Guide explains different types of defects, some of which are caused by factors that do not point to genuine regression failures.

User logons not captured

Consider the following scenario:
  • User1 logs on at 09:00
  • Recording is switched on at 09:30
  • At 10:00 User1 creates sales order 1234
  • At 10:30 User2 logs on
  • At 11:00 User2 changes sales order 1234
  • At 11:30 User3 logs on
  • At 12:00 User3 displays sales order 1234

In this scenario, because User1 logged on before the recording was switched on, Testimony didn’t capture their logon. By default, Testimony will discard any activity for a user without a logon, so the creation of sales order 1234 is not played back. Because of this, the activities of User2 and User3 will fail, because the sales order they are trying to access doesn’t exist.

There is an option in Testimony to create logon scripts for users where we didn’t capture the actual logon. However, when this is switched on we still discard any activity in the user session (i.e., the particular SAPGUI window) for which we didn’t have a logon. (This is so that, for example, we don’t try to play back a transaction from the middle of the screen logic.) So in our scenario above, sales order 1234 wouldn’t have been created (and hence the subsequent transactions would have failed) because User1 was working in one session. However, if, after the start of the recording, User1 had opened another SAPGUI session and then created sales order 1234, the other two transactions would have worked.
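The per-session discard rule described above can be sketched as follows. This is a hypothetical illustration with invented data structures, not Testimony’s actual logic: steps are grouped by SAPGUI session, and a session’s activity is replayed only if that session’s logon was captured during the recording window.

```python
# Hypothetical sketch of the "discard activity without a captured logon" rule.
# Steps are grouped per (user, session); a session's steps are replayed only
# if that session's logon fell inside the recording window.

recorded_steps = [
    # User1 logged on at 09:00, before the recording started, so no logon step exists
    {"user": "User1", "session": 1, "step": "create sales order 1234"},
    {"user": "User2", "session": 1, "step": "logon"},
    {"user": "User2", "session": 1, "step": "change sales order 1234"},
]

def playable_steps(steps):
    sessions_with_logon = {(s["user"], s["session"])
                           for s in steps if s["step"] == "logon"}
    return [s for s in steps if (s["user"], s["session"]) in sessions_with_logon]

# User1's creation of the sales order is discarded, so User2's change will fail
# at playback time: the order it depends on was never created.
assert [s["user"] for s in playable_steps(recorded_steps)] == ["User2", "User2"]
```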

This scenario would result in failures during the first playback, but these would also arise in the second playback and so would be filtered out by the defect proposal run after the second playback, since the failures would be for exactly the same reasons. We therefore know that these failures do not indicate genuine regression defects.


Locking

Scenario 1:

  • User1 goes to change sales order 1234
  • A few seconds later, User2 also tries to change sales order 1234, and receives a “Sales Order is locked” message
  • A few seconds later, User1 finishes their change and saves the sales order
  • A few seconds later, User2 tries to change the sales order again and this time is able to

Because locking is so transient (with each lock often lasting a few seconds or less), it’s possible that during the playback User2’s change is executed after the lock on the sales order has already been released. In this case, User2 won’t receive the “Sales Order is locked” message. This will lead to a failure (Different Message) in the script.

The converse is also true.

Scenario 2:

  • User3 goes to change sales order 1234
  • A minute later, User4 changes sales order 1234. Because User3 has finished their change, there is no lock, so User4’s change proceeds as normal

During the playback, it’s possible that User4’s change is executed while User3’s change is still running. This will lead User4 to receive a “Sales Order is locked” message, which will cause their script to fail.

Because locks are so transient and are dependent on the timing and sequencing of calls, it is possible that lock errors might occur in the first playback but not in the second, or in the second playback but not in the first. (Where a locking error occurs in both playbacks, it will be filtered out by the Double Playback functionality.)

Timing / Sequencing

Scenario 1:

  • Batch Job A updates stock levels in a warehouse. It starts at 15:00 and finishes at 16:00
  • At 16:30 User1 creates an order which checks the stock level. There is enough stock of the material, so the order is created

During the playback it’s possible that User1’s order creation runs before Batch Job A starts (or while the job is still running). In that case, when the stock check runs there may not be enough stock for the order, so an error message appears and the script fails.

It is most likely that an error of this type seen in the first playback will also be seen in the second and will therefore be filtered out by Double Playback. However, it is still possible that timing issues may occur in only one of the playbacks.
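The dependency in this scenario can be sketched in a few lines. This is a hypothetical illustration (invented function and quantities, not SAP’s availability-check logic): the order script’s outcome depends entirely on whether the batch job has replenished stock before the check runs.

```python
# Hypothetical sketch: User1's order succeeds only if Batch Job A has
# already replenished stock when the availability check runs.

def create_order(quantity, stock_level):
    if stock_level >= quantity:
        return "Order created"
    return "Not enough stock"  # unexpected message -> the script fails

# Recording: the batch job finished at 16:00, so stock was available at 16:30.
assert create_order(100, stock_level=500) == "Order created"
# Playback: the order runs before the batch job has updated stock levels,
# producing a message that was not seen in the recording.
assert create_order(100, stock_level=0) == "Not enough stock"
```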

Scenario 2:

  • A series of 25 user transactions executes between 15:00 and 16:00, each updating the statuses of various orders to “ready for delivery”
  • A batch job starts at 16:15 to process orders marked as ready for delivery. For each order found, a message is output to the job log.

This scenario can be considered the converse of the previous one: here, a batch job processes updates made by several dialog transactions. During the first playback, the batch job starts before any order status updates have been made. This results in a “no orders to process” message in the job log; because this message was not seen in the recording, the batch job is flagged as a failure. If, during the second playback, the batch job again starts before any orders have been updated, we would see the same failure, and it would be filtered out by the Double Playback functionality.

Regression failures causing non-regression failures

During a second playback in the Double Playback scenario, it is possible that a genuine regression failure in one script can lead to non-regression failures in other scripts.


  • Batch Job A updates stock levels in a warehouse. It starts at 15:00 and finishes at 16:00
  • At 16:30 User1 creates an order which checks the stock level. There is enough stock of the material, so the order is created

This is the same scenario as above, but in this case, during the first playback, both the batch job and User1’s transaction were executed in the correct order and completed successfully. However, during the second playback a regression error introduced into Batch Job A causes it to fail without updating the stock levels. This results in a failure of User1’s transaction, even though the transaction itself contains no regression error.
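The knock-on effect can be sketched as a two-step dependency. This is a hypothetical illustration (invented functions, not Testimony’s analysis): the genuine regression defect sits in the batch job, and the order script fails only because it consumes the batch job’s output.

```python
# Hypothetical sketch: a genuine regression defect in Batch Job A also knocks
# out the dependent order-creation script, which itself has no regression error.

def run_batch_job_a(has_regression):
    """Returns the stock level it sets, or None if the job fails."""
    return None if has_regression else 500

def run_user1_order(stock_level, quantity=100):
    """Succeeds only if the batch job ran and left enough stock."""
    return stock_level is not None and stock_level >= quantity

stock = run_batch_job_a(has_regression=True)   # genuine regression failure
assert run_user1_order(stock) is False         # knock-on, non-regression failure
assert run_user1_order(run_batch_job_a(has_regression=False)) is True
```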

