Running eCATT Test Configurations From Within ST30

Setting the Run Parameters

       1.      In transaction ST30, choose the eCATT Test tab.

       2.      Enter a log ID in the Log ID field.

The purpose of a log ID is to group related or similar tests under a common node, so that the tests can later be found and displayed together in a list. You must specify an existing log ID or create a new one and save it (this requires you to specify a transport order, because the log ID entries can be transported to another system). To create a new log ID, choose Edit Log IDs.

       3.      Enter a description of the data in the Performance Test field so that you can identify the performance figures later in transaction ST33.

 

For example, you could use the following syntax: <name of test configuration> + <further information> + <current date>
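If you want to apply this convention consistently, it can also be scripted. The following minimal Python sketch is purely illustrative (the test configuration name shown is hypothetical, and the field itself accepts free text):

    from datetime import date

    def performance_test_name(test_configuration: str, info: str) -> str:
        # Follows the suggested syntax:
        # <name of test configuration> + <further information> + <current date>
        return f"{test_configuration}_{info}_{date.today():%Y%m%d}"

    # Hypothetical test configuration name, for illustration only:
    print(performance_test_name("ZORDER_CREATE", "LOADTEST"))
    # e.g. ZORDER_CREATE_LOADTEST_20240315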

       4.      Specify the eCATT test configuration in the Test Configuration field.

You can also specify an evaluation schema for the performance figures. An evaluation schema defines which of the data records in the global performance analysis statistics are used for the evaluation.
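Conceptually, an evaluation schema is a selection rule over the statistics records. The following Python fragment illustrates only this idea; the record fields and the schema shown are invented and are not the actual ST30 data model:

    # Illustration only: fields and schema are invented, not the actual
    # structure of the global performance analysis statistics.
    records = [
        {"task_type": "DIALOG", "response_ms": 420},
        {"task_type": "RFC", "response_ms": 95},
        {"task_type": "DIALOG", "response_ms": 510},
    ]

    # The "evaluation schema" decides which records enter the evaluation.
    def schema(record):
        return record["task_type"] == "DIALOG"

    selected = [r for r in records if schema(r)]
    print(sum(r["response_ms"] for r in selected) / len(selected))  # 465.0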

       5.      For reliable measurement results, proceed as follows:

                             a.      Enter the number of eCATT preprocesses in the No. of eCATT Preprocs field. A value of at least 5 is recommended so that the system buffers are filled.

eCATT preprocesses precede the main runs from which the performance figures are retrieved. They set the resources used for the run (program buffer, table buffer, and so on) to a well-defined state before the performance of the subsequent eCATT runs is measured.

                            b.      Set the No. of eCATT Main Runs to at least 10 (recommended).

These are the runs whose performance behavior is to be measured. They are executed after the eCATT preprocesses.

 

If you activate With Run for SQL Trace, one additional run of the test configuration is executed to create an SQL trace. This prevents the SQL trace from influencing the measurements of the main runs. The overall run scheme is sketched below.
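Taken together, a measurement therefore consists of unmeasured preprocesses, measured main runs, and (optionally) one separate trace run. The following Python sketch illustrates this scheme in the abstract; run_test_configuration merely stands in for an eCATT execution and is not an SAP API:

    import time
    import statistics

    def run_test_configuration(trace=False):
        # Placeholder for one execution of the eCATT test configuration.
        time.sleep(0.01)  # simulated workload

    # Preprocesses: executed first and not measured; they bring the buffers
    # (program buffer, table buffer, and so on) into a well-defined state.
    for _ in range(5):
        run_test_configuration()

    # Main runs: only these contribute to the performance figures.
    timings = []
    for _ in range(10):
        start = time.perf_counter()
        run_test_configuration()
        timings.append(time.perf_counter() - start)

    # Optional separate run under SQL trace, so that the tracing overhead
    # does not distort the measured main runs.
    run_test_configuration(trace=True)

    print(f"median over {len(timings)} runs: {statistics.median(timings):.4f} s")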

Programming Guidelines (Optional)

If you activate Checklist, the program generates a list of performance checklist results that can be processed in an external spreadsheet application, such as Microsoft Excel. The checklist can only be created if SQL Trace Analysis (and therefore also With Run for SQL Trace) is also set, because some results of the checklist are based on the analysis of the SQL trace.

SQL Trace Analysis

To be able to perform this analysis, an SQL trace must have been created.

If you activate this indicator, the Code Inspector checks the coding during the performance test. The purpose of this function is to identify inefficient database accesses that may arise from dynamic SQL statements in the programs that were executed.

This analysis is also the reason for the checklist dependency described above: the checklist results for "Indexes", "WHERE statements", and "Buffers" are derived from the automatic SQL trace analysis (executed by the Code Inspector).

 

You can only activate the SQL trace analysis if you also choose With Run for SQL Trace, since otherwise there is no SQL trace available to determine the programs involved.

Static Program Analysis

This indicator determines whether the performance checks of the Code Inspector are applied to the programs involved in the performance test (eCATT). The purpose of this function is to identify inefficient database accesses in the static coding.

You can only activate the static program analysis if you also choose With Run for SQL Trace, since otherwise there is no SQL trace available to determine the programs involved.

Distributed Statistics Data (DSRs) (Optional)

In the Central System Destination field you can specify the destination of the central monitoring system in the system landscape.

Non-ABAP system components (such as J2EE components), for example, are registered in the central monitoring system. The statistical data for these system components can only be collected if this destination (usually the central system that is used for monitoring the system landscape) is used. If you do not make a specification, ST30 considers the current system to be the central monitoring system.

The destination entered here is only taken into account if With Distributed Statistical Data is also set. This indicator determines whether additional distributed statistics records (DSRs) are to be collected from the system components (such as J2EE components) known to the central monitoring system (accessible via the specified destination). In this case, all components that are registered in the central monitoring system are checked for statistics data that may have accumulated.

●      With/Without Transactional Context

This indicator determines whether the system searches for distributed statistics records (DSRs) without the transactional context of the executed business process.

●      Additional Statistics Data

You define here whether, in addition to the DSRs automatically collected by the specified central monitoring system, additional statistics data is to be collected from destinations entered manually on the Manual Test 1 or Manual Test 2 tabs before the test run. For more details, see the F1 Help.
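In the abstract, the collection pattern is: query every component registered in the central monitoring system, then query any manually entered destinations in addition. The following Python sketch shows only this pattern; every name in it is invented and does not correspond to an SAP interface:

    from dataclasses import dataclass, field

    # Illustration only: all names and structures here are invented.
    @dataclass
    class Component:
        name: str
        records: list = field(default_factory=list)

        def fetch_statistics(self):
            return list(self.records)

    @dataclass
    class CentralSystem:
        components: list = field(default_factory=list)

    def collect_dsrs(central, manual_components=()):
        # Components registered in the central monitoring system first,
        # then destinations entered on the Manual Test 1 / 2 tabs.
        dsrs = []
        for component in central.components:
            dsrs += component.fetch_statistics()
        for component in manual_components:
            dsrs += component.fetch_statistics()
        return dsrs

    central = CentralSystem([Component("J2EE_ENGINE", ["dsr_1", "dsr_2"])])
    print(collect_dsrs(central))  # ['dsr_1', 'dsr_2']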

Data Comparison

In this group box, you can compare the performance figures of two different tests. Enter the names of the performance tests to be compared and specify the values as required. For a description of each field, see the F1 Help.
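At its core, such a comparison is simple arithmetic over the figures of the two tests, as the following Python illustration shows (the metric names are invented, not actual ST30 fields):

    # Illustration only: metric names are invented.
    baseline = {"response_time_ms": 480.0, "db_time_ms": 120.0}
    candidate = {"response_time_ms": 432.0, "db_time_ms": 150.0}

    for metric, base in baseline.items():
        delta = candidate[metric] - base
        print(f"{metric}: {delta:+.1f} ms ({100.0 * delta / base:+.1f} %)")
    # response_time_ms: -48.0 ms (-10.0 %)
    # db_time_ms: +30.0 ms (+25.0 %)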

Test Control

Choose eCATT Test Only to start the measurement runs. The system executes the test runs in online mode.

Result

After the runs have completed, check for possible error messages on the Autom. Test: Logb tab.

If the log contains error messages that occurred during the run of a test configuration itself, proceed as follows:

       1.      Start transaction SECATT.

       2.      Choose Goto → Logs.

       3.      Specify the procedure number of the eCATT log in the Current Procedure No. field. You can obtain the procedure number on the Autom. Test: Logb tab in ST30.

       4.      Press Enter to call the log.