
Performance Test for MogDB on Kunpeng Servers

Test Objective

This document describes performance tests of MogDB 2.0.0 on Kunpeng servers in three deployment scenarios: a single node; one primary node and one standby node; and one primary node and two standby nodes (one synchronous standby and one asynchronous standby).

Test Environment

Environment Configuration

Category    | Server Configuration         | Client Configuration | Quantity
CPU         | Kunpeng 920                  | Kunpeng 920          | 128
Memory      | DDR4, 2933 MT/s              | DDR4, 2933 MT/s      | 2048 GB
Hard disk   | NVMe 3.5 TB                  | NVMe 3 TB            | 4
File system | XFS                          | XFS                  | 4
OS          | openEuler 20.03 (LTS)        | Kylin V10            |
Database    | MogDB 1.1.0 software package |                      |

Test Tool

Name             | Function
BenchmarkSQL 5.0 | An open-source, Java-based TPC-C benchmarking tool for OLTP databases, used to evaluate database transaction processing capability.

Test Procedure

MogDB Database Operation

  1. Obtain the database installation package.

  2. Install the database.

  3. Create the TPC-C test user and database.

    create user [username] identified by 'passwd';
    grant [origin user] to [username];
    create database [dbname];
  4. Stop the database and modify the postgresql.conf configuration file by appending the required parameters to the end of the file. (An end-to-end sketch of steps 3 and 4 follows this procedure.)

    For example, add the following parameters for the single-node test:

    max_connections = 4096
    allow_concurrent_tuple_update = true
    audit_enabled = off
    checkpoint_segments = 1024
    cstore_buffers = 16MB
    enable_alarm = off
    enable_codegen = false
    enable_data_replicate = off
    full_page_writes = off
    max_files_per_process = 100000
    max_prepared_transactions = 2048
    shared_buffers = 350GB
    use_workload_manager = off
    wal_buffers = 1GB
    work_mem = 1MB
    log_min_messages = FATAL
    transaction_isolation = 'read committed'
    default_transaction_isolation = 'read committed'
    synchronous_commit = on
    fsync = on
    maintenance_work_mem = 2GB
    vacuum_cost_limit = 2000
    autovacuum = on
    autovacuum_mode = vacuum
    autovacuum_max_workers = 5
    autovacuum_naptime = 20s
    autovacuum_vacuum_cost_delay = 10
    xloginsert_locks = 48
    update_lockwait_timeout = 20min
    enable_mergejoin = off
    enable_nestloop = off
    enable_hashjoin = off
    enable_bitmapscan = on
    enable_material = off
    wal_log_hints = off
    log_duration = off
    checkpoint_timeout = 15min
    enable_save_datachanged_timestamp = FALSE
    enable_thread_pool = on
    thread_pool_attr = '812,4,(cpubind:0-27,32-59,64-91,96-123)'
    enable_double_write = on
    enable_incremental_checkpoint = on
    enable_opfusion = on
    advance_xlog_file_num = 10
    numa_distribute_mode = 'all'
    track_activities = off
    enable_instr_track_wait = off
    enable_instr_rt_percentile = off
    track_counts = on
    track_sql_count = off
    enable_instr_cpu_timer = off
    plog_merge_age = 0
    session_timeout = 0
    enable_instance_metric_persistent = off
    enable_logical_io_statistics = off
    enable_user_metric_persistent = off
    enable_xlog_prune = off
    enable_resource_track = off
    instr_unique_sql_count = 0
    enable_beta_opfusion = on
    enable_beta_nestloop_fusion = on
    autovacuum_vacuum_scale_factor = 0.02
    autovacuum_analyze_scale_factor = 0.1
    client_encoding = UTF8
    lc_messages = en_US.UTF-8
    lc_monetary = en_US.UTF-8
    lc_numeric = en_US.UTF-8
    lc_time = en_US.UTF-8
    modify_initial_password = off
    ssl = off
    enable_memory_limit = off
    data_replicate_buffer_size = 16384
    max_wal_senders = 8
    log_line_prefix = '%m %u %d %h %p %S'
    vacuum_cost_limit = 10000
    max_process_memory = 12582912
    recovery_max_workers = 1
    recovery_parallelism = 1
    explain_perf_mode = normal
    remote_read_mode = non_authentication
    enable_page_lsn_check = off
    pagewriter_sleep = 100
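
The following is a minimal end-to-end sketch of steps 3 and 4. It assumes the instance listens on port 26000, the data directory is /data1/mogdb/data, and the test user, password, and database are named tpcc, Tpcc@1234, and tpccdb; all of these values are examples, not part of the measured configuration.

    # Step 3: create the TPC-C user and database (run the SQL inside gsql).
    gsql -d postgres -p 26000 -r
    create user tpcc identified by 'Tpcc@1234';
    grant all privileges to tpcc;   -- or grant an existing role, as in the template above
    create database tpccdb;
    \q
    # Step 4: stop the instance, append the tuning parameters to postgresql.conf,
    # then start the instance again so the new settings take effect.
    gs_ctl stop -D /data1/mogdb/data
    cat tpcc_tuning.conf >> /data1/mogdb/data/postgresql.conf   # tpcc_tuning.conf holds the parameters listed above
    gs_ctl start -D /data1/mogdb/data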

BenchmarkSQL Operation

  1. Modify the configuration file.

    Open the BenchmarkSQL installation directory and find the [config file] configuration file in the run directory.

    db=postgres
    driver=org.postgresql.Driver
    conn=jdbc:postgresql://[ip:port]/tpcc?prepareThreshold=1&batchMode=on&fetchsize=10
    user=[user]
    password=[passwd]
    warehouses=1000
    loadWorkers=80
    terminals=812
    //To run specified transactions per terminal- runMins must equal zero
    runTxnsPerTerminal=0
    //To run for specified minutes- runTxnsPerTerminal must equal zero
    runMins=30
    //Number of total transactions per minute
    limitTxnsPerMin=0
    //Set to true to run in 4.x compatible mode. Set to false to use the
    //entire configured database evenly.
    terminalWarehouseFixed=false  #true
    //The following five values must add up to 100
    //The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
    newOrderWeight=45
    paymentWeight=43
    orderStatusWeight=4
    deliveryWeight=4
    stockLevelWeight=4
    // Directory name to create for collecting detailed result data.
    // Comment this out to suppress.
    //resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
    //osCollectorScript=./misc/os_collector_linux.py
    //osCollectorInterval=1
    //osCollectorSSHAddr=tpcc@127.0.0.1
    //osCollectorDevices=net_eth0 blk_sda blk_sdg blk_sdh blk_sdi blk_sdj
  2. Run runDatabaseBuild.sh to generate data.

    ./runDatabaseBuild.sh [config file]
  3. Run runBenchmark.sh to test the database.

    ./runBenchmark.sh [config file]
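
A typical invocation sequence, assuming BenchmarkSQL 5.0 is unpacked in a directory named benchmarksql-5.0 and that its PostgreSQL sample properties file is props.pg (both names are assumptions; substitute your actual paths and [config file] name):

    cd benchmarksql-5.0/run
    cp props.pg props.mog             # props.mog stands in for the [config file]
    vi props.mog                      # apply the settings from step 1
    ./runDatabaseBuild.sh props.mog   # step 2: load the 1000-warehouse data set
    ./runBenchmark.sh props.mog       # step 3: run the timed TPC-C test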

OS Configuration

  1. Modify the PAGESIZE of the OS kernel (required only on EulerOS).

    Install kernel-4.19.36-1.aarch64.rpm.

    rpm -Uvh --force --nodeps kernel-4.19.36-1.aarch64.rpm
    # This file is based on the kernel package of linux 4.19.36. You can acquire it from the following directory:
    # 10.44.133.121 (root/Huawei12#$)
    # /data14/xy_packages/kernel-4.19.36-1.aarch64.rpm

    Modify the default boot entry in the OS boot configuration file.

    vim /boot/efi/EFI/euleros/grubenv  # Back up the grubenv file before modification.
    # GRUB Environment Block
    saved_entry=EulerOS (4.19.36) 2.0 (SP8)   -- Changed to 4.19.36
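
After rebooting into the new kernel, a quick sanity check (assuming a standard Linux userland) is to confirm the running kernel version and the page size it exposes:

    uname -r           # should report the 4.19.36 kernel after the reboot
    getconf PAGESIZE   # page size, in bytes, of the running kernel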

File System

  1. Change the block size of the XFS file system to 8 KB.

    Run the following command to check the attached NVMe disks:

    df -h | grep nvme
    /dev/nvme0n1         3.7T  2.6T  1.2T  69% /data1
    /dev/nvme1n1         3.7T  1.9T  1.8T  51% /data2
    /dev/nvme2n1         3.7T  2.2T  1.6T  59% /data3
    /dev/nvme3n1         3.7T  1.4T  2.3T  39% /data4
    # Run the xfs_info command to query information about an NVMe disk.
    xfs_info /data1
  2. Back up the required data.

  3. Format the disk.

    Using the /dev/nvme0n1 disk and the /data1 mount point as an example, run the following commands:

    umount /data1
    mkfs.xfs -b size=8192 /dev/nvme0n1 -f
    mount /dev/nvme0n1 /data1
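
To confirm that the new block size is in effect, the data section of the xfs_info output should report bsize=8192, for example:

    xfs_info /data1 | grep bsize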

Test Items and Conclusions

Test Result Summary

Test Item                              | Data Volume | Concurrent Transactions | Average CPU Usage | IOPS   | IO Latency | Write Ahead Logs | tpmC       | Test Time (Minutes)
Single node                            | 100 GB      | 500                     | 77.49%            | 17.96K | 819.05 us  | 13260            | 1567226.12 | 10
One primary node and one standby node  | 100 GB      | 500                     | 57.64%            | 5.31K  | 842.78 us  | 13272            | 1130307.87 | 10
One primary node and two standby nodes | 100 GB      | 500                     | 60.77%            | 5.3K   | 821.66 us  | 14324            | 1201560.28 | 10

Single Node

  • tpmC

    (Figure: tpmC results for the single-node test)

  • System data

    (Figure: system data for the single-node test)

One Primary Node and One Standby Node

  • tpmC

    (Figure: tpmC results for the one-primary, one-standby test)

  • System data

    (Figure: system data for the one-primary, one-standby test)

One Primary Node and Two Standby Nodes

  • tpmC

    (Figure: tpmC results for the one-primary, two-standby test)

  • System data

    (Figure: system data for the one-primary, two-standby test)
