MOT Performance Benchmarks

Our performance tests are based on the TPC-C benchmark, which is commonly used by both industry and academia.

Our tests used BenchmarkSQL (see MOT Sample TPC-C Benchmark), which generates the workload using interactive SQL commands, as opposed to stored procedures.

NOTE: Using the stored-procedure approach may produce even higher performance results, because it involves significantly fewer networking round trips and fewer database envelope SQL processing cycles.
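To illustrate the interactive-SQL approach, the following is a minimal sketch of a TPC-C-style New-Order flow driven entirely by individual SQL statements issued from a client, in the spirit of BenchmarkSQL but not its actual implementation. It assumes a PostgreSQL-compatible Python driver (psycopg2); the connection parameters, table names, and column names are illustrative only.

    # Simplified sketch of a TPC-C-style New-Order transaction driven by
    # interactive SQL commands; every statement is a separate client/server
    # round trip. Table and column names follow the TPC-C schema but are
    # illustrative, not the exact BenchmarkSQL definitions.
    import psycopg2

    conn = psycopg2.connect(host="db-host", port=5432, dbname="tpcc",
                            user="bench", password="bench")
    conn.autocommit = False

    with conn.cursor() as cur:
        # Read warehouse and district information.
        cur.execute("SELECT w_tax FROM warehouse WHERE w_id = %s", (1,))
        w_tax = cur.fetchone()[0]
        cur.execute("SELECT d_tax, d_next_o_id FROM district "
                    "WHERE d_w_id = %s AND d_id = %s", (1, 1))
        d_tax, next_o_id = cur.fetchone()

        # Create the order and update stock for one order line.
        cur.execute("INSERT INTO orders (o_id, o_d_id, o_w_id, o_c_id) "
                    "VALUES (%s, %s, %s, %s)", (next_o_id, 1, 1, 42))
        cur.execute("UPDATE stock SET s_quantity = s_quantity - %s "
                    "WHERE s_w_id = %s AND s_i_id = %s", (5, 1, 1001))

    conn.commit()  # One transaction, several network round trips.
    conn.close()

A stored-procedure implementation would perform the same reads and writes in a single server-side call, which is why that approach is expected to yield even higher results.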

All tests that compared the performance of MogDB MOT tables with disk-based tables used synchronous logging, with its optimized group-commit=on variant for MOT.

Finally, we performed an additional test to evaluate MOT's ability to quickly ingest massive quantities of data and to serve as an alternative to mid-tier data ingestion solutions.

All tests were performed in June 2020.

The following shows various types of MOT performance benchmarks.

MOT Hardware

The tests were performed on servers with the following configurations and with 10GbE networking -

  • ARM64/Kunpeng 920-based 2-socket servers, model Taishan 2280 v2 (total 128 Cores), 800GB RAM, 1TB NVMe disk. OS: openEuler

  • ARM64/Kunpeng 920-based 4-socket servers, model Taishan 2480 v2 (total 256 Cores), 512GB RAM, 1TB NVMe disk. OS: openEuler

  • x86-based Dell servers, with 2 sockets of Intel Xeon Gold 6154 CPU @ 3GHz and 18 cores each (72 cores, with hyper-threading=on), 1TB RAM, 1TB SSD. OS: CentOS 7.6

  • x86-based SuperMicro server, with 8-sockets of Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz 24 cores (total 384 Cores, with hyper-threading=on), 1TB RAM, 1.2TB SSD (Seagate 1200 SSD 200GB, SAS 12Gb/s). OS: Ubuntu 16.04.2 LTS

  • x86-based Huawei server, with 4 sockets of Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.2GHz (total 96 Cores, with hyper-threading=on), 512GB RAM, 2TB SSD. OS: CentOS 7.6

MOT Results - Summary

MOT provides 2.5x to 4.1x higher performance than disk-based tables and reaches 4.8 million tpmC on ARM/Kunpeng-based servers with 256 cores. The results clearly demonstrate MOT's exceptional ability to scale up and utilize all hardware resources. Performance jumps as the quantity of CPU sockets and server cores increases.

MOT delivers up to 30,000 tpmC/core on ARM/Kunpeng-based servers and up to 40,000 tpmC/core on x86-based servers.
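For example, the 3.8M tpmC single-node MOT result on the 2-socket, 128-core ARM/Kunpeng server (shown below) works out to roughly 3,800,000 / 128 ≈ 29,700 tpmC/core, and the 3.9M tpmC result on the 4-socket, 96-core x86 server to roughly 3,900,000 / 96 ≈ 40,600 tpmC/core.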

Due to a more efficient durability mechanism, the MOT replication overhead in a Primary/Secondary High Availability scenario is 7% on ARM/Kunpeng and 2% on x86 servers, as opposed to the disk-table overhead of 20% on ARM/Kunpeng and 15% on x86 servers.

Finally, MOT delivers 2.5x lower latency on average, with TPC-C transaction response times that are 2 to 7 times faster.

MOT High Throughput

The following shows the results of various MOT table high throughput tests.

ARM/Kunpeng 2-Socket 128 Cores

Performance

The following figure shows the results of testing the TPC-C benchmark on a Huawei ARM/Kunpeng server that has two sockets and 128 cores.

Four types of tests were performed -

  • Two tests were performed on MOT tables and another two tests were performed on MogDB disk-based tables.
  • Two of the tests were performed on a Single node (without high availability), meaning that no replication was performed to a secondary node. The other two tests were performed on Primary/Secondary nodes (with high availability), meaning that data written to the primary node was replicated to a secondary node.

MOT tables are represented in orange and disk-based tables are represented in blue.

Figure 1 ARM/Kunpeng 2-Socket 128 Cores - Performance Benchmarks


The results showed that:

  • As expected, the performance of MOT tables is significantly greater than that of disk-based tables in all cases.
  • For a Single Node - 3.8M tpmC for MOT tables versus 1.5M tpmC for disk-based tables
  • For a Primary/Secondary Node - 3.5M tpmC for MOT tables versus 1.2M tpmC for disk-based tables
  • For production grade (high-availability) servers (Primary/Secondary Node) that require replication, the benefit of using MOT tables is even more significant than for a Single Node (without high-availability, meaning no replication).
  • The MOT replication overhead of a Primary/Secondary High Availability scenario is 7% on ARM/Kunpeng and 2% on x86 servers, as opposed to the overhead of disk tables of 20% on ARM/Kunpeng and 15% on x86 servers.

Performance per CPU core

The following figure shows the TPC-C benchmark performance/throughput results per core of the tests performed on a Huawei ARM/Kunpeng server that has two sockets and 128 cores. The same four types of tests were performed (as described above).

Figure 2 ARM/Kunpeng 2-Socket 128 Cores - Performance per Core Benchmarks


The results showed that, as expected, the per-core performance of MOT tables is significantly greater than that of disk-based tables in all cases. They also show that for production-grade (high-availability) servers (Primary/Secondary Node) that require replication, the benefit of using MOT tables is even more significant than for a Single Node (without high availability, meaning no replication).

ARM/Kunpeng 4-Socket 256 Cores

The following demonstrates MOT's excellent concurrency control performance by showing the tpmC per quantity of connections.

Figure 3 ARM/Kunpeng 4-Socket 256 Cores - Performance Benchmarks


The results show that performance increases significantly even when there are many cores and that peak performance of 4.8M tpmC is achieved at 768 connections.

x86-based Servers

  • 8-Socket 384 Cores

The following demonstrates MOT's excellent concurrency control performance by comparing the tpmC per quantity of connections between disk-based tables and MOT tables. This test was performed on an x86 server with eight sockets and 384 cores. The orange line represents the results for MOT tables.

Figure 4 x86 8-Socket 384 Cores - Performance Benchmarks


The results show that MOT tables significantly outperform disk-based tables and deliver highly efficient per-core performance on this 384-core server, reaching over 3M tpmC.

  • 4-Socket 96 Cores

MOT achieved 3.9 million tpmC on this 4-socket, 96-core server. The following figure shows highly efficient MOT table performance per core, reaching 40,000 tpmC/core.

Figure 5 4-Socket 96 Cores - Performance Benchmarks


MOT Low Latency

The following was measured on an ARM/Kunpeng 2-socket server (128 cores). The numbers are in milliseconds (ms).

Figure 1 Low Latency (90th%) - Performance Benchmarks


On average, MOT transactions complete 2.5x faster, with MOT latency of 10.5 ms compared to 23-25 ms for disk-based tables.

NOTE: The average was calculated by taking into account the percentage distribution of all five TPC-C transaction types. For more information, refer to the description of TPC-C transactions in the MOT Sample TPC-C Benchmark section.
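For reference, the standard TPC-C mix weights the five transaction types at approximately 45% New-Order, 43% Payment, and 4% each for Order-Status, Delivery, and Stock-Level; the averages shown here are weighted accordingly.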

Figure 2 Low Latency (90th%, Transaction Average) - Performance Benchmarks


MOT RTO and Cold-Start Time

High Availability Recovery Time Objective (RTO)

MOT is fully integrated into MogDB, including support for high-availability scenarios consisting of primary and secondary deployments. The WAL redo log replication mechanism replicates changes to the secondary database node, where they are replayed.

If a Failover event occurs, whether it is due to an unplanned primary node failure or due to a planned maintenance event, the secondary node quickly becomes active. The amount of time that it takes to recover and replay the WAL Redo Log and to enable connections is also referred to as the Recovery Time Objective (RTO).

The RTO of MogDB, including MOT, is less than 10 seconds.

NOTE: The Recovery Time Objective (RTO) is the duration of time and the service level within which a business process must be restored after a disaster in order to avoid unacceptable consequences associated with a break in continuity. In other words, the RTO answers the question: "How much time did it take to recover after notification of a business process disruption?"

In addition, as shown in the MOT High Throughput section, the MOT replication overhead in a Primary/Secondary High Availability scenario is only 7% on ARM/Kunpeng servers and 2% on x86 servers, as opposed to the replication overhead of disk-based tables, which is 20% on ARM/Kunpeng and 15% on x86 servers.

Cold-Start Recovery Time

Cold-start recovery time is the amount of time it takes for a system to become fully operational after being stopped. For in-memory databases, this includes loading all data and indexes into memory, so it depends on the data size, the hardware bandwidth, and the efficiency of the software algorithms that process it.

Our MOT tests using ARM servers with NVMe disks demonstrate the ability to load a 100 GB database checkpoint in 40 seconds (2.5 GB/sec). Because MOT does not persist indexes, they are re-created at cold-start, so the actual size of the loaded data + indexes is approximately 50% larger. This translates to an MOT cold-start rate of 150 GB of data + indexes in 40 seconds, or 225 GB per minute (3.75 GB/sec).
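As a rough, hardware-dependent extrapolation from these numbers, a 400 GB MOT database (approximately 600 GB including the re-created indexes) would cold-start in roughly 600 / 3.75 ≈ 160 seconds on comparable hardware.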

The following figure demonstrates the cold-start process and how long it takes to load data into MOT tables from disk after a cold start.

Figure 1 Cold-Start Time - Performance Benchmarks


  • Database Size - The total amount of time to load the entire database (in GB) is represented by the blue line and the TIME (sec) Y axis on the left.
  • Throughput - The quantity of database GB throughput per second is represented by the orange line and the Throughput GB/sec Y axis on the right.

NOTE: The performance demonstrated during the test is very close to the bandwidth of the SSD hardware. Therefore, it is feasible that higher (or lower) performance may be achieved on a different platform.

MOT Resource Utilization

The following figure shows the resource utilization of the test performed on an x86 server with four sockets, 96 cores, and 512GB of RAM. It demonstrates that MOT tables are able to efficiently and consistently consume almost all available CPU resources. For example, it shows that almost 100% CPU utilization is achieved with 192 cores and 3.9M tpmC.

  • tpmC - The number of TPC-C transactions completed per minute is represented by the orange bar and the tpmC Y axis on the left.
  • CPU % Utilization - The amount of CPU utilization is represented by the blue line and the CPU % Y axis on the right.

Figure 1 Resource Utilization - Performance Benchmarks


MOT Data Ingestion Speed

This test simulates real-time data streams arriving from massive numbers of IoT, cloud, or mobile devices that need to be quickly and continuously ingested into the database on a massive scale.

  • The test involved ingesting large quantities of data, as follows -

    • 10 million rows were sent by 500 threads in 2,000 rounds, with 10 records (rows) per INSERT command and 200 bytes per record (a sketch of such a bulk-insert client is shown after this list).
    • The client and database were on different machines. Database server - x86 2-socket, 72 cores.
  • Performance Results

    • Throughput - 10,000 records/core, or 2 MB/core (10,000 records x 200 bytes ≈ 2 MB).

    • Latency - 2.8 ms per 10-record bulk insert (including client-server networking)

      CAUTION: We expect multiple additional, and potentially significant, performance improvements in MOT for this scenario. See MOT Usage Scenarios for more information about large-scale data streaming and data ingestion.
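The following is a minimal sketch of the kind of bulk-insert client used in such a test, assuming psycopg2 and a hypothetical sensor_data table; the thread count, round count, and record size mirror the parameters listed above, while the table name, schema, and connection details are illustrative.

    # Minimal sketch of a multi-threaded ingestion client: 500 threads x
    # 2,000 rounds x 10 rows per INSERT, ~200 bytes per row. The table name,
    # schema, and connection details are hypothetical.
    import threading
    import psycopg2

    THREADS, ROUNDS, ROWS_PER_INSERT = 500, 2000, 10
    PAYLOAD = "x" * 180  # pads each record to roughly 200 bytes

    def worker(thread_id):
        conn = psycopg2.connect(host="db-host", dbname="ingest",
                                user="bench", password="bench")
        cur = conn.cursor()
        for r in range(ROUNDS):
            rows = [(thread_id, r, i, PAYLOAD) for i in range(ROWS_PER_INSERT)]
            values = ",".join(cur.mogrify("(%s,%s,%s,%s)", row).decode()
                              for row in rows)
            # One INSERT statement carrying 10 rows, i.e. a single round trip.
            cur.execute("INSERT INTO sensor_data "
                        "(source_id, round_no, seq_no, payload) VALUES " + values)
            conn.commit()
        conn.close()

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

In practice a compiled client or a driver-level batch API would add less client-side overhead, but the shape of the workload - many concurrent connections, each issuing small multi-row INSERT commands - is the same.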
