The bot is an executable program that resides on a Windows machine (normally a virtual machine). During playback, the bot machines log on as the recorded users and execute the recorded transactions. The requirements for bot setup can be found here.
This is the primary SAP system in which Testimony is installed and operated. Testimony users log on to the Central system to configure Testimony, create test plans, start recordings and playbacks, and analyse the results.
These are run manually before a recording or playback to validate that the environment is ready to perform those functions. The Testimony Administrator should run the check steps and review the results before performing a recording or playback.
Allows the comparison of recorded data from the execution queue with the usage data. It provides high-level statistics (e.g., what percentage of critical priority dialog transactions you recorded) as well as detailed information on each dialog transaction, batch job, etc.
These are used to link scripts which use the same data, for example a purchase order number. If the creation of a purchase order fails during the playback, then Testimony recognises that there is no point running a subsequent script that approves this purchase order. Testimony will therefore cancel the execution of the order approval script. Testimony will also recognise if a different order number is generated during playback and will adjust subsequent scripts to use this new number rather than the recorded number.
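The cancellation and ID-substitution behaviour described above can be sketched roughly as follows. This is a simplified illustration, not Testimony's actual implementation; all function and field names are hypothetical:

```python
# Simplified sketch of linked-script handling during playback
# (hypothetical names; not Testimony's real implementation).

def play_back(linked_scripts, execute):
    """Play back a chain of linked scripts, propagating dynamic IDs.

    `linked_scripts` is an ordered list of dicts holding the recorded
    input/output values; `execute` runs one script and returns
    (success, generated_output_value).
    """
    id_map = {}        # recorded value -> value generated during playback
    results = []
    cancelled = False
    for script in linked_scripts:
        if cancelled:
            # A predecessor failed, so running this script is pointless.
            results.append((script["name"], "CANCELLED"))
            continue
        # Substitute a newly generated ID (e.g. a purchase order number)
        # for the recorded one before running the script.
        value = id_map.get(script["input"], script["input"])
        success, output = execute(script["name"], value)
        if not success:
            # e.g. purchase order creation failed: cancel dependents.
            cancelled = True
            results.append((script["name"], "FAILED"))
            continue
        if output is not None and output != script["output"]:
            # Playback generated a different number than was recorded.
            id_map[script["output"]] = output
        results.append((script["name"], "PASSED"))
    return results
```

In this sketch, a "create purchase order" script that returns a new document number causes the "approve purchase order" script to run with the new number rather than the recorded one, and a failed creation cancels the approval outright.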
To record and play back, Testimony relies on enhancements on the source or target system that enable the recording or playback to operate correctly. These are switched on before a recording or playback and are automatically deactivated at the end in the “Post-Processing Steps”. Should a recording or playback be stopped unexpectedly or due to a technical error, the Testimony Administrator should deactivate the enhancements manually.
The execution queue is built when scripts are added from the repository and contains the scripts to be played back. Logic is built into the “Add to Execution Queue” process that identifies and establishes linkages between related scripts as the execution queue is being built.
A filtered recording is used when you want to record a subset of users, transactions, objects, or transaction types rather than all activity on the source system. It is typically used for testing purposes to ensure that the setup from central to source system has been completed correctly.
Filter sets have two main uses: to exclude certain objects (transactions, batch jobs, etc.) from a recording; and to provide special handling of error cases during a playback. For example, if you want Testimony to ignore all occurrences of transaction SM21 from the recording, then adding this transaction to the recording filter set will achieve this. If you want to ignore occurrences of message E123 from a particular screen, you can set this message as an exclusion in the comparison filter set. Filter sets can also be defined for the transfer to repository (most commonly for setting up transaction sampling) and for the transfer to the execution queue, although this is less frequently used. For more detail, see the Filter Sets section here.
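The exclusion idea can be illustrated with a rough sketch. The data shapes and function name below are hypothetical; real filter sets are configured within Testimony, not in code:

```python
# Illustrative sketch of applying a recording filter set
# (hypothetical data shapes; not Testimony's real filter set format).

def apply_filter_set(recorded_items, excluded_transactions):
    """Drop any recorded item whose transaction code is excluded."""
    return [item for item in recorded_items
            if item["tcode"] not in excluded_transactions]

recording = [
    {"tcode": "VA01", "user": "JSMITH"},    # sales order creation
    {"tcode": "SM21", "user": "BASIS01"},   # system log - to be excluded
    {"tcode": "ME21N", "user": "JSMITH"},   # purchase order creation
]
kept = apply_filter_set(recording, {"SM21"})
```

Here every occurrence of SM21 is dropped before the data moves on, which mirrors the effect of adding that transaction to the recording filter set.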
Testimony records activity deeper than just the UI so that objects such as change documents and number ranges are also observed and recorded. These objects are used to create relationships, or linkages, between scripts so that dependencies can be enforced and validated. These can then be checked at playback and during results analysis to ensure that these match, providing a deeper level of testing.
Testimony can be configured to send out notifications when certain actions are executed or to provide regular updates on ongoing actions. Notifications are managed through the notification setup in the configuration tray.
The playback is the execution of the scripts in the execution queue, via the bots, on the target system. The playback executes the scripted activity and generates the test results for comparison and analysis.
These are run automatically after a recording or playback is completed. Any errors in post-processing will cause a hard stop, preventing the status from moving to complete. If errors are found, the operator should investigate them to determine whether they need to be manually resolved.
These are run automatically before a recording or playback starts. Any errors will cause a hard stop preventing the recording or playback from starting. Errors should be resolved before attempting to restart the recording or playback.
A recording (either Filtered or Standard) is the process by which actions on the source system are captured by Testimony.
The repository is a staging post for recorded transactions. Once all recorded transactions have been stored in the Central System, they are transferred to the repository (potentially with some filtering) before being transferred to the execution queue for playback. The repository is where any manipulation or deletion of sessions should take place, since if a mistake is made the sessions can be restored by transferring them again from the recorded data.
Sampling is part of Filter Set functionality and is only set at the “Transfer to Repository” stage. Sampling allows the operator to decrease the volume of a set of activities without negatively impacting the validity of the test results. Since scripts will often modify data that will be used by later scripts, sampling is designed to be used for display or read-only activities that do not manipulate data. The idea behind it is as follows:
If you recorded a read-only process that runs every 5 minutes for the duration of the recording, you can play back a small percentage of those processes to help reduce playback times while still testing that process. Sampling can be used for Dialog, Batch and RFC processes.
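The idea of playing back only a percentage of a repetitive read-only process can be sketched as follows. This is a simplified illustration with hypothetical names; real sampling is configured in a “Transfer to Repository” filter set rather than in code:

```python
# Simplified sketch of transaction sampling (hypothetical; real
# sampling is configured in a "Transfer to Repository" filter set).

def sample(occurrences, keep_percent):
    """Keep roughly `keep_percent`% of a list of repeated occurrences,
    spaced evenly across the recording rather than taken from one end."""
    if keep_percent >= 100 or not occurrences:
        return list(occurrences)
    step = round(100 / keep_percent)
    return occurrences[::step]

# A read-only report that ran every 5 minutes over a 2-hour recording:
runs = [f"report_run_{i:03d}" for i in range(24)]
kept = sample(runs, 25)   # play back ~25% of the runs
```

Spacing the kept occurrences evenly across the recording (rather than taking the first few) still exercises the process throughout the playback window while cutting playback time.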
Testimony utilises the shared memory of the source system to save the recorded data temporarily before writing it to the database. This is so that the source system does not see a significant increase in I/O activity during a recording. To prevent any negative impact on source system performance Testimony will stop the recording if it runs too low on available shared memory. The recommended settings for the shared memory parameters are here.
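The low-memory safeguard amounts to a threshold check along the following lines. This is illustrative only; the threshold value and names are assumptions, and the real check runs inside Testimony's recording infrastructure:

```python
# Illustrative sketch of the shared-memory safeguard (hypothetical
# threshold and names; the real check is internal to Testimony).

LOW_MEMORY_THRESHOLD_PCT = 10  # assumed cut-off for free shared memory

def should_stop_recording(free_bytes, total_bytes):
    """Stop the recording when free shared memory falls below the
    threshold, protecting source-system performance."""
    free_pct = 100 * free_bytes / total_bytes
    return free_pct < LOW_MEMORY_THRESHOLD_PCT
```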
This is the system that is recorded and therefore acts as the source for the recording. In BAU operation of Testimony, this is usually the production system.
A standard recording records all activities, excluding any defined exceptions in the “Filter Sets”.
This is the regression test system into which recorded scripts are played back via the Bots. It is recommended that the Target system is dedicated for use with Testimony and is refreshed with a point-in-time backup of the Source system taken as of the start of the recording.
A test plan is the logical container for the recording, playback, and results of a test scenario. When setting up a test plan the operator will define a Source system, Target system, system mapping and authorizations for users. To simplify the test plan creation process, test plans can be copied for scenarios using the same Source and Target systems.