During the benchmark run, all relevant performance parameters of an SAP system are monitored, for example, CPU utilization, memory consumption, the I/O system, network load, as well as functional errors and system availability. As a result, "special tunings" aimed solely at achieving better benchmark results are effectively ruled out.
Each SAP Standard Application Benchmark consists of a number of script files that simulate typical and popular transactions and workflows in a particular business scenario, together with a predefined SAP client database that contains sample company data against which the benchmark is run. The benchmark run takes at least 15 minutes, and the throughput results are then extrapolated to one hour. The output files are thoroughly analyzed for any divergence from the expected behavior; these technology checks result in a benchmark certification.
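The extrapolation from the measured interval to the certified hourly figure is linear. A minimal sketch, using made-up example values (the item count below is hypothetical, not a real benchmark result):

```python
# Illustrative only: extrapolating a 15-minute measurement interval
# to an hourly throughput figure. The item count is a made-up example.
measured_interval_min = 15      # minimum length of the benchmark run
items_processed = 2_500         # fully processed line items in that interval

# Linear extrapolation to one hour
items_per_hour = items_processed * (60 / measured_interval_min)
print(items_per_hour)  # 10000.0
```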
In most benchmarks, online users (or dialog users) are simulated who complete business transactions step by step, that is, in dialog steps. The user think time is set to ten seconds between dialog steps. This closely approximates the behavior of an experienced power user such as a call center agent.
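A simulated dialog user can be sketched as a simple loop over scripted steps. This is an assumption-laden illustration, not SAP's actual driver: the step list and the `execute_step` callable are hypothetical stand-ins for the scripted transaction steps.

```python
import time

THINK_TIME_S = 10  # fixed user think time between dialog steps

def run_dialog_user(steps, execute_step, think_time=THINK_TIME_S):
    """Simulate one dialog user: execute each scripted dialog step,
    record its response time, then pause for the think time.
    `execute_step` is a hypothetical callable standing in for the
    scripted transaction step."""
    response_times = []
    for step in steps:
        start = time.monotonic()
        execute_step(step)                         # perform the dialog step
        response_times.append(time.monotonic() - start)
        time.sleep(think_time)                     # user "thinks" before the next step
    return response_times
```

In a real load driver, many such loops run concurrently, one per simulated user.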
In the ramp-up phase of these benchmark runs, the number of concurrently working users is increased until the dialog response time approaches the absolute limit of two seconds. The benchmark run then continues with the high-load phase, which is the actual interval considered for the benchmark certification. Upon completion of the benchmark, the technology partner who ran it sends the benchmark output files to SAP for certification on behalf of the SAP Benchmark Council.
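The ramp-up logic amounts to a search for the highest user count that stays within the two-second limit. A minimal sketch, assuming a hypothetical `measure_response_time` callback that returns the average dialog response time for a given number of concurrent users:

```python
RESPONSE_LIMIT_S = 2.0  # limit on average dialog response time

def ramp_up(measure_response_time, step=10, max_users=100_000):
    """Illustrative ramp-up: keep adding simulated users as long as the
    average dialog response time stays within the two-second limit.
    `measure_response_time` is a hypothetical callback, not a real API."""
    users = 0
    while users + step <= max_users:
        if measure_response_time(users + step) > RESPONSE_LIMIT_S:
            break          # the next increment would exceed the limit
        users += step
    return users           # highest user count still within the limit
```

With a toy model where response time grows linearly with load, e.g. `lambda n: n / 100`, the loop stops at 200 users, the last count at which the two-second limit is not exceeded.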
The results of the benchmarks are expressed as the number of fully processed business line items. For the benchmarking community, these business figures are more meaningful than user counts because they represent concrete values rather than the more variable user-based measures. A fully processed business item can be, for example, a sales order, a goods movement, or an assembly order. For accuracy and consistency, all benchmarks, dialog and batch (background) alike, report application-specific throughput figures as their key performance indicators.
All benchmarks can run on all available configurations.
Most benchmarks are certified for the two-tier and the three-tier configuration. The limiting factor for all benchmark configurations is the CPU of the database server.