Global Performance Analysis

Use

Today, software, system landscapes, and software production processes are becoming increasingly complex, with business scenarios spanning several systems. Manual performance testing of individual programs or transactions using standard test and analysis tools, such as Performance Trace (ST05), ABAP Runtime Analysis (SE30), and Business Transaction Analysis (STAD), is therefore no longer sufficient, because performance often depends on the interaction of individual components across various systems.

The Global Performance Analysis (transaction ST30, component BC-TWB-TST-P-GPA) eliminates the need for manual tests, which are costly in terms of manpower and time, by enabling you to carry out the following tasks:

- Test performance across system boundaries for ABAP and non-ABAP components

- Deliver performance results of all application and system components involved in a specific business process

- Automate performance analyses, for example, for regression or scalability tests

- Detect performance degradation during the entire development cycle

ST30 is not designed for stress testing. For that purpose, use benchmarking tools (benchmarking = simulating the load of many users).

Integration

The performance figures that ST30 collects from the systems under test are stored in a central database. There they can be accessed at any time for analysis and can serve as a basis for repeated comparisons and statistical evaluations (scalability tests, regression tests, and so on).
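To make the idea of repeated comparison concrete, here is a minimal Python sketch (purely illustrative; the store, scenario names, figures, and tolerance are assumptions, not part of ST30 or any SAP API) of how centrally stored runs can serve as a baseline for a regression check:

```python
from statistics import mean

# Illustrative sketch (not an SAP API): each test run stores one figure per
# component in a central store; a regression check compares a new run
# against the mean of the earlier runs for the same scenario.
store = {}  # scenario -> list of runs; each run maps component -> response time in ms

def record_run(scenario, figures):
    """Append the figures of one test run to the central store."""
    store.setdefault(scenario, []).append(figures)

def regressions(scenario, new_run, tolerance=0.2):
    """Return components whose new figure exceeds the historical mean by more than tolerance."""
    history = store.get(scenario, [])
    flagged = {}
    for component, value in new_run.items():
        past = [run[component] for run in history if component in run]
        if not past:
            continue  # no baseline yet for this component
        baseline = mean(past)
        if value > baseline * (1 + tolerance):
            flagged[component] = (baseline, value)
    return flagged

record_run("order_to_cash", {"ERP": 120.0, "CRM": 80.0})
record_run("order_to_cash", {"ERP": 130.0, "CRM": 78.0})
print(regressions("order_to_cash", {"ERP": 200.0, "CRM": 80.0}))
# -> {'ERP': (125.0, 200.0)}
```

Here the ERP figure (200 ms against a baseline mean of 125 ms) exceeds the 20% tolerance and is flagged, while the CRM figure is within range.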

ST30 retrieves the performance figures generated from test runs by:

- Using statistical records (compare transactions STAD, ST03N, ST03G, etc.) as well as data generated by the Performance Trace (transaction ST05)

- Integrating the functions of the Code Inspector (transaction SCI)

You can access the performance results in the central database using transaction ST33, or directly from within ST30.

Features

The main functions provided by ST30 are:

- Automated performance testing

- Displaying the performance test results

ST30 also allows you to conduct manual performance tests if required, but its main focus is automated testing.

Running automated performance tests using ST30 means that you can start an individual business process or scenario automatically and repeatedly, involving all the relevant components in the system landscape. The tool collects all the data that is relevant from a performance point of view from all systems involved and stores it centrally in a database.

The performance tests are based on eCATT (extended Computer-Aided Test Tool, transaction SECATT). Although mainly intended for functional testing, eCATT test configurations are also indispensable for automated testing using ST30. They are based on test scripts that contain all the steps carried out in the business scenario to be tested. Test scripts are usually recorded or written manually and can be edited at any time.

Following are some of the main properties of ST30:

- The performance tests executed with ST30 are initiated on a central test system and are dynamic (performed at runtime), as opposed to static checks (not performed at runtime) such as those the Code Inspector performs on program code.

- Because the system components involved in a test can have different release versions and thus different representations of performance figures, ST30 always displays statistical data in a uniform unit of measurement (UOM), irrespective of which UOM is used in the tested system component. For example, memory consumption is always given in MB.

- Whether a system supports 64 bits is automatically identified and taken into account.

- Time differences between systems located in different time zones (for example, the USA and Europe) are automatically taken into account, so you do not have to convert the values yourself.
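As a concrete illustration of the last two normalization points, the following Python sketch (an assumption for illustration only, not the ST30 implementation) converts memory figures to a uniform MB unit and timestamps to UTC before they are compared:

```python
from datetime import datetime, timezone, timedelta

# Illustrative sketch (not the ST30 implementation): figures arriving from
# different components may use different units and local time zones, so
# normalize memory to MB and timestamps to UTC before storing them.

_TO_MB = {"B": 1 / (1024 * 1024), "KB": 1 / 1024, "MB": 1.0, "GB": 1024.0}

def memory_in_mb(value, unit):
    """Convert a memory figure to MB regardless of the unit the component reported."""
    return value * _TO_MB[unit]

def to_utc(local_time, utc_offset_hours):
    """Attach the component's UTC offset and convert the timestamp to UTC."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_time.replace(tzinfo=tz).astimezone(timezone.utc)

print(memory_in_mb(2048, "KB"))              # -> 2.0
us = to_utc(datetime(2024, 5, 1, 9, 0), -5)  # e.g. a US system at UTC-5
eu = to_utc(datetime(2024, 5, 1, 15, 0), 1)  # e.g. a European system at UTC+1
print(us == eu)                              # both are 14:00 UTC -> True
```

Once every figure is in MB and every timestamp in UTC, runs from heterogeneous components can be compared directly.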

Summary of Steps Performed

Following is a brief overview of the steps performed during automated performance testing (this is based on the assumption that the scenario to be tested has already been recorded in an eCATT test script):

1. ST30 invokes eCATT on the central test system.

2. The eCATT test script runs the scenario, involving all components (programs, transactions) in the system landscape.

3. ST30 retrieves the performance figures from all the system components used.

4. ST30 stores the collected performance figures in the central database.
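The four steps above can be sketched as a simple driver loop. This Python sketch is purely illustrative: the script and component classes and all method names are hypothetical stand-ins, not eCATT or SAP APIs:

```python
# Illustrative driver for the four steps (hypothetical names, not an SAP API).

def run_performance_test(script, components, database):
    # Steps 1-2: invoke the test script; it runs the scenario and
    # reports which components in the landscape were involved.
    involved = script.run()
    # Step 3: retrieve the performance figures from every component used.
    figures = {name: components[name].collect_figures() for name in involved}
    # Step 4: store the collected figures in the central database.
    database.append({"script": script.name, "figures": figures})
    return figures

class FakeScript:
    """Stand-in for a recorded eCATT test script (hypothetical)."""
    name = "order_entry"
    def run(self):
        return ["ERP", "CRM"]  # components the scenario touched

class FakeComponent:
    """Stand-in for a system component that reports its performance figures."""
    def __init__(self, response_ms):
        self.response_ms = response_ms
    def collect_figures(self):
        return {"response_time_ms": self.response_ms}

central_db = []
result = run_performance_test(
    FakeScript(), {"ERP": FakeComponent(120), "CRM": FakeComponent(80)}, central_db
)
print(result)  # -> {'ERP': {'response_time_ms': 120}, 'CRM': {'response_time_ms': 80}}
```

The point of the sketch is the division of labor: the script drives the scenario, each component reports its own figures, and the central database accumulates one record per run.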

See also:

Conducting Automated Performance Tests

Conducting Manual Performance Tests

Displaying the Performance Test Results