The SAP BW Enhanced Mixed Load Benchmark (BW-EML Benchmark) addresses the current demands of typical business warehouse customers. These demands are shaped by three major requirements: near real-time reporting on data that is loaded continuously, ad-hoc multi-user reporting, and ever-growing data volumes.
The latest addition to the family of SAP BW Application Benchmarks – the BW-EML Benchmark – has been developed with these three customer requirements in mind.
Like its predecessor, the BW-MXL Benchmark, the BW-EML Benchmark focuses on a mix of multi-user reporting load and delta data that is loaded into the database while the queries are running.
The data model consists of three InfoCubes and seven DataStore objects. Each of these objects holds the data of one particular year; the three InfoCubes hold the same data as the corresponding DataStore objects for the last three years. Both object types have the same set of fields. The InfoCube comes with a full set of 16 dimensions comprising a total of 63 characteristics, with cardinalities of up to 1 million distinct values and one complex hierarchy. With its 30 different key figures, including key figures requiring exception aggregation, the InfoCube data model has been defined in close accordance with typical customer data models. In the DataStore object data model, the high-cardinality characteristics have been defined as key fields, while the remaining characteristics have been modeled as data fields.
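The split described above can be sketched in a few lines of Python. The characteristic names, cardinalities, and the threshold are illustrative assumptions, not part of the benchmark definition:

```python
# Schematic sketch of the DataStore object key/data split described above.
# Names and cardinalities are invented for illustration only.
CHARACTERISTICS = {          # characteristic -> cardinality (assumed values)
    "customer": 1_000_000,
    "material": 1_000_000,
    "region": 100,
    "channel": 10,
}

def split_dso_fields(characteristics, key_threshold=100_000):
    """Put high-cardinality characteristics into the DSO key part,
    the rest into the data part (threshold is a made-up example)."""
    key_part = sorted(c for c, card in characteristics.items()
                      if card >= key_threshold)
    data_part = sorted(c for c, card in characteristics.items()
                       if card < key_threshold)
    return key_part, data_part

key_part, data_part = split_dso_fields(CHARACTERISTICS)
```

With the assumed cardinalities, `customer` and `material` land in the key part, `region` and `channel` in the data part.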
The SAP BW-EML Benchmark can be executed with various data volumes. In its smallest configuration, the benchmark rules require an initial load of a total of 500 million records (i.e. 50 million records per InfoCube / DataStore object) coming from ASCII flat files. Further possible configurations include initial load volumes of 1,000 million records, 2,000 million records, and more; even larger data volumes can be defined for distributed server landscapes. The total record length in the ASCII files is 873 bytes. In every configuration, the number of records loaded in addition to the initial load is one thousandth of the initial record count. A single benchmark run must last at least one hour, during which the delta data have to be loaded in small chunks every five minutes. Each InfoCube and DataStore object has to be loaded with the same number of records.
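The sizing rules above can be worked through for the smallest configuration. The figures come from the text; the helper function is purely illustrative and not part of any SAP tooling:

```python
# Worked example of the BW-EML sizing rules for the smallest configuration.
RECORD_BYTES = 873   # total record length in the ASCII flat files
OBJECTS = 10         # 3 InfoCubes + 7 DataStore objects

def sizing(initial_records_total, run_minutes=60, chunk_interval_min=5):
    delta_total = initial_records_total // 1000   # delta = 1/1000 of initial
    chunks = run_minutes // chunk_interval_min    # one delta load every 5 minutes
    return {
        "initial_per_object": initial_records_total // OBJECTS,
        "flat_file_gib": initial_records_total * RECORD_BYTES / 2**30,
        "delta_total": delta_total,
        "delta_per_chunk": delta_total // chunks,
    }

s = sizing(500_000_000)
assert s["initial_per_object"] == 50_000_000   # 50 million per object
assert s["delta_total"] == 500_000             # spread over 12 chunks
```

For the 500-million-record configuration this yields roughly 400 GiB of flat-file input and about 42,000 delta records per five-minute chunk across all ten objects.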
Eight reports have been defined on two MultiProviders – one MultiProvider for the three InfoCubes, and another MultiProvider for the seven DataStore objects. Since the InfoCubes and DataStore objects have the same set of fields, the respective reports on both MultiProviders are identical, so that there are effectively two sets of four reports each.
Reports select data for one particular year, with the InfoCube or DataStore object containing that data picked at random. Within one report, further navigation steps are executed, each of them resulting in an individual query and a database access. Although the first three reports follow similar navigation patterns, the filter and drill-down operations have been randomized to address the demand for ad-hoc types of queries. While random values for filter parameters ensure that different partitions of the data are accessed, a random choice of the characteristics used for drill-downs or other slice-and-dice operations ensures that a huge number of different characteristic combinations is covered in a multi-user reporting scenario. In order to guarantee a high degree of reproducibility of the reporting results, characteristics have been grouped by their respective cardinalities, and only characteristics of the same cardinality are considered for a randomized operation.
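The randomization scheme described above can be sketched as follows. The characteristic names, cardinality groups, and years are invented for illustration; only the overall structure (random year, random filter value, drill-down swapped within a cardinality group) follows the text:

```python
import random

# Hypothetical sketch of a randomized BW-EML navigation step.
# Characteristics are grouped by cardinality so that a randomized
# drill-down always substitutes a characteristic of the same cardinality.
CHAR_GROUPS = {                                   # cardinality -> names (invented)
    100: ["region", "channel", "plant"],
    10_000: ["customer_group", "material_group"],
    1_000_000: ["customer", "material"],
}
YEARS = [2010, 2011, 2012]  # one InfoCube / DataStore object per year

def ad_hoc_navigation(rng):
    year = rng.choice(YEARS)                       # selects the provider holding that year
    cardinality = rng.choice(list(CHAR_GROUPS))    # pick a cardinality group ...
    drilldown = rng.choice(CHAR_GROUPS[cardinality])  # ... then a characteristic in it
    filter_value = rng.randrange(cardinality)      # random filter hits a random partition
    return {"year": year, "drilldown": drilldown,
            "filter": (drilldown, filter_value)}
```

Seeding the `random.Random` instance makes a whole simulated multi-user run repeatable while still covering many characteristic combinations.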
The key figure of this benchmark is the number of ad-hoc navigation steps per hour. Given the differences in queries and data models, results of the BW-EML Benchmark cannot be compared with those of the BW-MXL Benchmark.