
TPCC Performance Tuning Guide

This section describes the TPC-C testing method for the MogDB database and the key system-level tuning that the database depends on to reach its optimal tpmC performance.


Hardware Environment

  • Server

    • The optimal TPC-C result can be achieved with one four-socket Kunpeng server (256 CPUs, 512 GB to 1024 GB of memory) as the database server and one two-socket Kunpeng server as the client.
    • Under normal conditions, two two-socket Kunpeng servers (128 CPUs, 512 GB to 1024 GB of memory) can be used.
    • Two x86 servers can also be used, but this guide's NUMA optimizations do not apply to them.
  • Hard disk

    • Two NVMe flash cards are recommended for the database.
    • Three to four SSDs are used.
  • NIC

    • Use the Hi1822 NICs that ship with the Kunpeng server.
    • It is suggested to use gigabit NICs on x86 servers.

Software Environment

  • Database: MogDB 2.1.1

  • TPCC client: Use TiDB-optimized BenchmarkSQL 5.0 (https://github.com/pingcap/benchmarksql)

  • Dependencies

    Software    Suggested Version
    --------    -----------------
    numactl     -
    JDK         1.8.0-242
    Ant         1.10.5
    htop        -

Test Procedure

  1. Install MogDB. For details, see PTK-based Installation for single-instance deployment.

  2. Initialize parameter settings and restart the database to make parameters take effect. For details, see Recommended Parameter Settings.
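    As a sketch of this step, assuming a single-instance installation whose data directory is /opt/data/db2 (the path used later in this guide), a parameter can be set with gs_guc and applied with a restart:

    ```shell
    # Hedged example: set one recommended parameter and restart a
    # single-instance MogDB. Adjust PGDATA to your installation.
    PGDATA=/opt/data/db2
    gs_guc set -D "$PGDATA" -c "max_connections=4096"
    gs_ctl restart -D "$PGDATA"
    ```

    Alternatively, edit postgresql.conf directly as described in the Database Parameter Tuning section.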

  3. Download the TPC-C standard test tool BenchmarkSQL 5.0.

    [root@node151 ~]# git clone -b 5.0-mysql-support-opt-2.1 https://github.com/pingcap/benchmarksql.git
    Cloning into 'benchmarksql'...
    remote: Enumerating objects: 106, done.
    remote: Total 106 (delta 0), reused 0 (delta 0), pack-reused 106
    Receiving objects: 100% (106/106), 64.46 KiB | 225.00 KiB/s, done.
    Resolving deltas: 100% (30/30), done.
  4. Download and install the JDK and Ant dependencies.

    [root@node151 ~]# rpm -ivh ant-1.10.5-6.oe1.noarch.rpm jdk-8u281-linux-aarch64.rpm --force --nodeps
    warning: ant-1.10.5-6.oe1.noarch.rpm: Header V3 RSA/SHA1 Signature, key ID b25e7f66: NOKEY
    warning: jdk-8u281-linux-aarch64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
    Verifying...                          ################################# [100%]
    Preparing...                          ################################# [100%]
    Updating / installing...
       1:jdk1.8-2000:1.8.0_281-fcs        ################################# [ 50%]
    Unpacking JAR files...
            tools.jar...
            rt.jar...
            jsse.jar...
            charsets.jar...
            localedata.jar...
       2:ant-0:1.10.5-6.oe1               ################################# [100%]
  5. Configure JAVA environment variables.

    [root@node151 ~]# tail -3  /root/.bashrc
    export JAVA_HOME=/usr/java/jdk1.8.0_281-aarch64
    export PATH=$JAVA_HOME/bin:$PATH
    export CLASSPATH=.:$JAVA_HOME/lib:$BENCHMARKSQLPATH/run/ojdbc7.jar
  6. Open the BenchmarkSQL directory and run the ant command to compile source files.

    After successful compilation, two directories, build and dist, are generated.

    [root@node151 benchmarksql-5.0-mysql-support-opt-2.1]# pwd
    /tmp/benchmarksql-5.0-mysql-support-opt-2.1
    [root@node151 benchmarksql-5.0-mysql-support-opt-2.1]# ant
    Buildfile: /tmp/benchmarksql-5.0-mysql-support-opt-2.1/build.xml
    
    init:
        [mkdir] Created dir: /tmp/benchmarksql-5.0-mysql-support-opt-2.1/build
    
    compile:
        [javac] Compiling 12 source files to /tmp/benchmarksql-5.0-mysql-support-opt-2.1/build
    
    dist:
        [mkdir] Created dir: /tmp/benchmarksql-5.0-mysql-support-opt-2.1/dist
          [jar] Building jar: /tmp/benchmarksql-5.0-mysql-support-opt-2.1/dist/BenchmarkSQL-5.0.jar
    
    BUILD SUCCESSFUL
    Total time: 1 second
  7. Based on your system architecture, download the corresponding JDBC driver to the lib/postgres folder in the BenchmarkSQL directory, extract it, and delete the bundled JDBC driver.

    [root@node151 postgres]# pwd
    /tmp/benchmarksql-5.0-mysql-support-opt-2.1/lib/postgres/
    [root@node151 postgres]# ls
    openGauss-2.0.0-JDBC.tar.gz  postgresql-9.3-1102.jdbc41.jar
    [root@node151 postgres]# rm -f postgresql-9.3-1102.jdbc41.jar
    [root@node151 postgres]# tar -xf openGauss-2.0.0-JDBC.tar.gz
    [root@node151 postgres]# ls
    openGauss-2.0.0-JDBC.tar.gz  postgresql.jar
  8. In the database, create database tpcc_db and user tpcc.

    [omm@node151 ~]$ gsql -d postgres -p 26000 -r
    postgres=# create database tpcc_db;
    CREATE DATABASE
    postgres=# \q
    [omm@node151 ~]$ gsql -d tpcc_db -p 26000 -r
    tpcc_db=# CREATE USER tpcc WITH PASSWORD "tpcc@123";
    CREATE ROLE
    tpcc_db=# GRANT ALL ON schema public TO tpcc;
    GRANT
    tpcc_db=# ALTER User tpcc sysadmin;
    ALTER ROLE
  9. On the client, enter the run folder in the BenchmarkSQL directory, edit the BenchmarkSQL configuration file, and set the test parameters, including the database user name, password, IP address, port, and database name.

    [root@node151 db1]# cd /tmp/benchmarksql-5.0-mysql-support-opt-2.1/run
    [root@node151 run]# vim props.mogdb
    db=postgres
    driver=org.opengauss.Driver
    conn=jdbc:opengauss://172.16.0.176:26000/tpcc_db?prepareThreshold=1&batchMode=on&fetchsize=10&loggerLevel=off #Modify the connection character string, including IP address, port, and database.
    user=tpcc #User name
    password=tpcc@123 #Password
    warehouses=100  #Number of warehouses
    terminals=300 #Number of terminals
    runMins=5  #Running time
    runTxnsPerTerminal=0
    loadWorkers=100
    limitTxnsPerMin=0
    terminalWarehouseFixed=false
    newOrderWeight=45
    paymentWeight=43
    orderStatusWeight=4
    deliveryWeight=4
    stockLevelWeight=4
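    The five transaction weights above are percentages of the TPC-C transaction mix and should total 100. A minimal sketch (the helper name is illustrative) that checks a props file before a run:

    ```shell
    # Sum the five *Weight entries of a BenchmarkSQL props file.
    # The TPC-C transaction mix expects the weights to total 100.
    sum_weights() {
        awk -F= '/Weight=/ { sum += $2 } END { print sum + 0 }' "$1"
    }

    # Example: sum_weights props.mogdb    (expected output: 100)
    ```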
  10. Initialize data.

    [root@node151 run]# sh runDatabaseBuild.sh props.mogdb
    
    # ------------------------------------------------------------
    # Loading SQL file ./sql.common/tableCreates.sql
    # ------------------------------------------------------------
    create table bmsql_config (
    cfg_name    varchar(30) primary key,
    cfg_value   varchar(50)
    );
    
    
    ......
    
    # ------------------------------------------------------------
    # Loading SQL file ./sql.postgres/buildFinish.sql
    # ------------------------------------------------------------
    -- ----
    -- Extra commands to run after the tables are created, loaded,
    -- indexes built and extra's created.
    -- PostgreSQL version.
    -- ----
    vacuum analyze;
  11. In runBenchmark.sh, change the path of funcs.sh to its actual location.

    [root@node151 run]# vim runBenchmark.sh
    #!/usr/bin/env bash
    
    if [ $# -ne 1 ] ; then
        echo "usage: $(basename $0) PROPS_FILE" >&2
        exit 2
    fi
    
    SEQ_FILE="./.jTPCC_run_seq.dat"
    if [ ! -f "${SEQ_FILE}" ] ; then
        echo "0" > "${SEQ_FILE}"
    fi
    SEQ=$(expr $(cat "${SEQ_FILE}") + 1) || exit 1
    echo "${SEQ}" > "${SEQ_FILE}"
    
    source /tmp/benchmarksql-5.0-mysql-support-opt-2.1/run/funcs.sh $1  #Modify the path to the actual path in which the file is stored.
    
    setCP || exit 1
    
    myOPTS="-Dprop=$1 -DrunID=${SEQ}"
    
    java -cp "$myCP" $myOPTS jTPCC
  12. Start the test. The tpmC lines show the test result; the full output is saved to runLog_mmdd-hh24miss.log.

    [root@node151 run]# sh runBenchmark.sh props.mogdb| tee runLog_`date +%m%d-%H%M%S`.log
    
    ...
    
    15:08:26,663 [Thread-16] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 106140.46
    15:08:26,663 [Thread-16] INFO   jTPCC : Term-00, Measured tpmTOTAL = 235800.39
    15:08:26,664 [Thread-16] INFO   jTPCC : Term-00, Session Start     = 2021-08-04 15:03:26
    15:08:26,664 [Thread-16] INFO   jTPCC : Term-00, Session End       = 2021-08-04 15:08:26
    15:08:26,664 [Thread-16] INFO   jTPCC : Term-00, Transaction Count = 1179449
    15:08:26,664 [Thread-16] INFO   jTPCC : executeTime[Payment]=29893614
    15:08:26,664 [Thread-16] INFO   jTPCC : executeTime[Order-Status]=2564424
    15:08:26,664 [Thread-16] INFO   jTPCC : executeTime[Delivery]=4438389
    15:08:26,664 [Thread-16] INFO   jTPCC : executeTime[Stock-Level]=4259325
    15:08:26,664 [Thread-16] INFO   jTPCC : executeTime[New-Order]=48509926

    Adjust props.mogdb, or use several props files, and repeat the test as required.
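    When the test is repeated with different props files, the measured values can be pulled out of the saved logs; a small sketch assuming the log format shown above (the helper name is illustrative):

    ```shell
    # Extract the measured tpmC and tpmTOTAL values from a BenchmarkSQL run log.
    report_tpmc() {
        awk '/Measured tpmC/     { printf "tpmC=%s\n", $NF }
             /Measured tpmTOTAL/ { printf "tpmTOTAL=%s\n", $NF }' "$1"
    }
    ```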

  13. To prevent the large volume of data generated by repeated tests from affecting performance, clear the data before starting a new test.

    [root@node151 run]# sh runDatabaseDestroy.sh props.mogdb
    # ------------------------------------------------------------
    # Loading SQL file ./sql.common/tableDrops.sql
    # ------------------------------------------------------------
    drop table bmsql_config;
    drop table bmsql_new_order;
    drop table bmsql_order_line;
    drop table bmsql_oorder;
    drop table bmsql_history;
    drop table bmsql_customer;
    drop table bmsql_stock;
    drop table bmsql_item;
    drop table bmsql_district;
    drop table bmsql_warehouse;
    drop sequence bmsql_hist_id_seq;

    During tuning, you can execute the build operation once and then execute the run operation multiple times. In a formal test, however, you are advised to perform the build, run, and destroy operations in sequence every time.
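For a formal test, the recommended build, run, destroy sequence can be wrapped in one script; a sketch assuming it is executed from the BenchmarkSQL run directory:

```shell
#!/usr/bin/env bash
# One formal TPC-C test cycle: build, run, and destroy in sequence.
set -e
PROPS=${1:-props.mogdb}
sh runDatabaseBuild.sh   "$PROPS"
sh runBenchmark.sh       "$PROPS" | tee "runLog_$(date +%m%d-%H%M%S).log"
sh runDatabaseDestroy.sh "$PROPS"
```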


Tuning

1. Host Optimization (Kunpeng Only)

Adjust BIOS.

  • Choose BIOS > Advanced > MISC Config, and set Support Smmu to Disabled.
  • Choose BIOS > Advanced > MISC Config, and set CPU Prefetching Configuration to Disabled.
  • Choose BIOS > Advanced > Memory Config, and set Die Interleaving to Disabled.

2. OS Optimization (Kunpeng Only)

  • Change the OS kernel PAGESIZE to 64 KB (the default value).

  • Disable irqbalance.

    systemctl stop irqbalance
  • Adjust numa_balance.

    echo 0 > /proc/sys/kernel/numa_balancing
  • Adjust the transparent huge page.

    echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
    echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag
  • Set the I/O scheduler of the NVMe disk queues to none (repeat for every NVMe device; nvme0n1 is an example).

    echo none > /sys/block/nvme0n1/queue/scheduler
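The OS-level settings above can be applied together after each boot; a sketch to be run as root (the device glob under /sys/block should match your NVMe disks):

```shell
#!/usr/bin/env bash
# Apply the Kunpeng OS tunings described above; run as root.
set -e
systemctl stop irqbalance
echo 0 > /proc/sys/kernel/numa_balancing
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
for q in /sys/block/nvme*/queue/scheduler; do
    echo none > "$q"
done
```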

3. File System Configuration

  • Format the disks as xfs with a block size of 8 KB, matching the database page size.

    mkfs.xfs -b size=8192 /dev/nvme0n1 -f
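After formatting, mount the file system where the database data or a tablespace will live; a sketch with an assumed mount point /tpccdir1 and the commonly used noatime option:

```shell
mkdir -p /tpccdir1
mount -o noatime /dev/nvme0n1 /tpccdir1
# Add a matching entry to /etc/fstab to make the mount persistent.
```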

4. Network Configuration

  • NIC multi-interrupt queue setting

    Download IN500_solution_5.1.0.SPC401.zip and install hinicadm.

    [root@node151 fc]# pwd
    /root/IN500_solution_5/tools/linux_arm/fc
    [root@node151 fc]# rpm -ivh hifcadm-2.4.1.0-1.aarch64.rpm
    Verifying...                          ################################# [100%]
    Preparing...                          ################################# [100%]
            package hifcadm-2.4.1.0-1.aarch64 is already installed
    [root@node151 fc]#
  • Modify the maximum number of multi-interrupt queues supported by the system.

    [root@node151 config]# pwd
    /root/IN500_solution_5/tools/linux_arm/nic/config
    [root@node151 config]# ./hinicconfig hinic0 -f std_sh_4x25ge_dpdk_cfg_template0.ini
    [root@node151 config]# reboot
    [root@node151 config]# ethtool -L enp3s0 combined 48

    The tuning value varies with the platform and workload. On the current 128-core platform, the value is 12 on the database server and 48 on the client.

  • Enable the TSO, LRO, GRO, and GSO offload features.

    ethtool -K enp3s0 tso on
    ethtool -K enp3s0 lro on
    ethtool -K enp3s0 gro on
    ethtool -K enp3s0 gso on
  • Check the current NIC firmware version.

    [root@node151 ~]# ethtool -i enp3s0
    driver: hinic
    version: 2.3.2.11
    firmware-version: 2.4.1.0
    expansion-rom-version:
    bus-info: 0000:03:00.0
    supports-statistics: yes
    supports-test: yes
    supports-eeprom-access: no
    supports-register-dump: no
    supports-priv-flags: no

    The NIC firmware version is 2.4.1.0.

  • Update the NIC firmware.

    [root@node151 cfg_data_nic_prd_1h_4x25G]# pwd
    /root/IN500_solution_5/firmware/update_bin/cfg_data_nic_prd_1h_4x25G
    [root@node151 cfg_data_nic_prd_1h_4x25G]# hinicadm updatefw -i enp3s0 -f /root/IN500_solution_5/firmware/update_bin/cfg_data_nic_prd_1h_4x25G/Hi1822_nic_prd_1h_4x25G.bin

    Restart the server and confirm that the NIC firmware version is updated to 2.4.1.0.

5. Core Binding on the Database Server and Client Server

  • NUMA binding tuning on Kunpeng servers (128 cores)

    • Install NUMA on the database server and client server.

      yum install numactl -y

      Install the NIC driver on the database server and client server. For details, see the Network Configuration section.

    • Database server

      Bind the NIC interrupts by running bind_net_irq.sh:

      cp `find /opt -name "bind*.sh"|head -1 ` /root
      sh /root/bind_net_irq.sh 12

      Set database parameters:

      thread_pool_attr = '345,4,(cpubind:1-28,32-60,64-92,96-124)'
      enable_thread_pool = on

      Stop the database.

      Change the database start command to the following:

      numactl -C 1-28,32-60,64-92,96-124 mogdb --single_node -D /opt/data/db2/ -p 26000 &
    • Client server

      Copy /root/bind_net_irq.sh to the client server.

      sh /root/bind_net_irq.sh 48

      Change the benchmark start command to the following:

      numactl -C 0-19,32-51,64-83,96-115 sh runBenchmark.sh props.mogdb
  • NUMA binding tuning on Kunpeng servers (256 cores)

    • Install NUMA on the database server and client server.

      yum install numa* -y

      Install the NIC driver on the database server and client server. For details, see the Network Configuration section.

    • Database server

      Bind the NIC interrupts by running bind_net_irq.sh:

      cp `find /opt -name "bind*.sh"|head -1 ` /root
      sh /root/bind_net_irq.sh 24

      Set database parameters:

      thread_pool_attr = '696,4,(cpubind:1-28,32-60,64-92,96-124,128-156,160-188,192-220,224-252)'
      enable_thread_pool = on

      Stop the database.

      Change the database start command to the following:

      numactl -C 1-28,32-60,64-92,96-124,128-156,160-188,192-220,224-252 mogdb --single_node -D /opt/data/db2/ -p 26000 &
    • Client server

      Copy /root/bind_net_irq.sh to the client server.

      sh /root/bind_net_irq.sh 48

      Change the benchmark start command to the following:

      numactl -C 0-19,32-51,64-83,96-115 sh runBenchmark.sh props.mogdb
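The cpubind lists in both configurations follow one pattern: on every 32-core NUMA node the last three cores, plus core 0 on the first node, are left unbound for NIC interrupts and the walwriter thread. A sketch (the helper name is illustrative) that reproduces the lists from the node count:

```shell
# Build the cpubind range list for N 32-core NUMA nodes, leaving core 0
# and the last 3 cores of each node free for NIC IRQs and the walwriter.
cpubind_ranges() {
    local nodes=$1 per=32 out="" base start n
    for ((n = 0; n < nodes; n++)); do
        base=$((n * per))
        start=$base
        [ "$n" -eq 0 ] && start=1
        out="${out:+$out,}${start}-$((base + per - 4))"
    done
    printf '%s\n' "$out"
}

# cpubind_ranges 4 prints 1-28,32-60,64-92,96-124 (128-core server).
```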

6. Database Parameter Tuning (Commonly Used Method)

Modify postgresql.conf in PGDATA and restart the database. Some parameters appear more than once in the list below (for example, fsync, synchronous_commit, and wal_level); when a parameter is duplicated, the last occurrence in the file takes effect.

max_connections = 4096
allow_concurrent_tuple_update = true
audit_enabled = off
cstore_buffers =16MB
enable_alarm = off
enable_codegen = false
enable_data_replicate = off
full_page_writes = off
max_files_per_process = 100000
max_prepared_transactions = 2048
shared_buffers = 350GB
use_workload_manager = off
wal_buffers = 1GB
work_mem = 1MB
transaction_isolation = 'read committed'
default_transaction_isolation = 'read committed'
synchronous_commit = on
fsync = on
maintenance_work_mem = 2GB
vacuum_cost_limit = 2000
autovacuum = on
autovacuum_mode = vacuum
autovacuum_vacuum_cost_delay =10
xloginsert_locks = 48
update_lockwait_timeout =20min
enable_mergejoin = off
enable_nestloop = off
enable_hashjoin = off
enable_bitmapscan = on
enable_material = off
wal_log_hints = off
log_duration = off
checkpoint_timeout = 15min
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.02
enable_save_datachanged_timestamp =FALSE
log_timezone = 'PRC'
timezone = 'PRC'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
enable_double_write = on
enable_incremental_checkpoint = on
enable_opfusion = on
numa_distribute_mode = 'all'
track_activities = off
enable_instr_track_wait = off
enable_instr_rt_percentile = off
track_counts =on
track_sql_count = off
enable_instr_cpu_timer = off
plog_merge_age = 0
session_timeout = 0
enable_instance_metric_persistent = off
enable_logical_io_statistics = off
enable_user_metric_persistent =off
enable_xlog_prune = off
enable_resource_track = off
instr_unique_sql_count = 0
enable_beta_opfusion = on
enable_thread_pool = on
enable_partition_opfusion=off
# Bind the walwriter thread to core 0.
wal_writer_cpu=0
xlog_idle_flushes_before_sleep = 500000000
max_io_capacity = 2GB
dirty_page_percent_max = 0.1
candidate_buf_percent_target = 0.7
bgwriter_delay = 500
pagewriter_sleep = 30
checkpoint_segments =10240
advance_xlog_file_num = 100
autovacuum_max_workers = 20
autovacuum_naptime = 5s
bgwriter_flush_after = 256kB
data_replicate_buffer_size = 16MB
enable_stmt_track = off
remote_read_mode=non_authentication
wal_level = archive
hot_standby = off
hot_standby_feedback = off
client_min_messages = ERROR
log_min_messages = FATAL
enable_asp = off
enable_bbox_dump = off
enable_ffic_log = off
enable_twophase_commit = off
minimum_pool_size = 200
wal_keep_segments = 1025
incremental_checkpoint_timeout = 5min
max_process_memory = 12GB
vacuum_cost_limit = 10000
xloginsert_locks = 8
wal_writer_delay = 100
wal_file_init_num = 30
wal_level=minimal
max_wal_senders=0
fsync=off
synchronous_commit = off
enable_indexonlyscan=on
thread_pool_attr = '345,4,(cpubind:1-28,32-60,64-92,96-124)'
enable_page_lsn_check = off
enable_double_write = off
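Several parameters above are set twice (for example fsync, synchronous_commit, and wal_level); in postgresql.conf the last occurrence wins. A small sketch (the helper name is illustrative) to list duplicated keys before applying the file:

```shell
# Print every parameter name that appears more than once in a
# postgresql.conf-style file; comment lines are ignored.
dup_params() {
    grep -Eo '^[A-Za-z_]+' "$1" | sort | uniq -d
}
```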

7. BenchmarkSQL Tuning

  • Connection string

    conn=jdbc:opengauss://10.10.10.40:26000/tpcc?prepareThreshold=1&batchMode=on&fetchsize=10&loggerLevel=off
  • Modify tableCreates.sql to distribute the data: adjust FILLFACTOR and place large tables in dedicated tablespaces so they can be moved to separate data disks.

    [root@node151 ~]# ls benchmarksql-5.0-mysql-support-opt-2.1/run/sql.common/tableCreates.sql
    benchmarksql-5.0-mysql-support-opt-2.1/run/sql.common/tableCreates.sql
    [root@node151 sql.common]# cat tableCreates.sql
    CREATE TABLESPACE example2 relative location 'tablespace2';
    CREATE TABLESPACE example3 relative location 'tablespace3';
    
    create table bmsql_config (
      cfg_name    varchar(30),
      cfg_value   varchar(50)
    );
    
    create table bmsql_warehouse (
      w_id        integer   not null,
      w_ytd       decimal(12,2),
      w_tax       decimal(4,4),
      w_name      varchar(10),
      w_street_1  varchar(20),
      w_street_2  varchar(20),
      w_city      varchar(20),
      w_state     char(2),
      w_zip       char(9)
    ) WITH (FILLFACTOR=80);
    
    create table bmsql_district (
      d_w_id       integer       not null,
      d_id         integer       not null,
      d_ytd        decimal(12,2),
      d_tax        decimal(4,4),
      d_next_o_id  integer,
      d_name       varchar(10),
      d_street_1   varchar(20),
      d_street_2   varchar(20),
      d_city       varchar(20),
      d_state      char(2),
      d_zip        char(9)
     ) WITH (FILLFACTOR=80);
    
    create table bmsql_customer (
      c_w_id         integer        not null,
      c_d_id         integer        not null,
      c_id           integer        not null,
      c_discount     decimal(4,4),
      c_credit       char(2),
      c_last         varchar(16),
      c_first        varchar(16),
      c_credit_lim   decimal(12,2),
      c_balance      decimal(12,2),
      c_ytd_payment  decimal(12,2),
      c_payment_cnt  integer,
      c_delivery_cnt integer,
      c_street_1     varchar(20),
      c_street_2     varchar(20),
      c_city         varchar(20),
      c_state        char(2),
      c_zip          char(9),
      c_phone        char(16),
      c_since        timestamp,
      c_middle       char(2),
      c_data         varchar(500)
    ) WITH (FILLFACTOR=80) tablespace example2;
    
    create sequence bmsql_hist_id_seq;
    
    create table bmsql_history (
      hist_id  integer,
      h_c_id   integer,
      h_c_d_id integer,
      h_c_w_id integer,
      h_d_id   integer,
      h_w_id   integer,
      h_date   timestamp,
      h_amount decimal(6,2),
      h_data   varchar(24)
    ) WITH (FILLFACTOR=80);
    
    create table bmsql_new_order (
      no_w_id  integer   not null,
      no_d_id  integer   not null,
      no_o_id  integer   not null
    ) WITH (FILLFACTOR=80);
    
    create table bmsql_oorder (
      o_w_id       integer      not null,
      o_d_id       integer      not null,
      o_id         integer      not null,
      o_c_id       integer,
      o_carrier_id integer,
      o_ol_cnt     integer,
      o_all_local  integer,
      o_entry_d    timestamp
    ) WITH (FILLFACTOR=80);
    
    create table bmsql_order_line (
      ol_w_id         integer   not null,
      ol_d_id         integer   not null,
      ol_o_id         integer   not null,
      ol_number       integer   not null,
      ol_i_id         integer   not null,
      ol_delivery_d   timestamp,
      ol_amount       decimal(6,2),
      ol_supply_w_id  integer,
      ol_quantity     integer,
      ol_dist_info    char(24)
    ) WITH (FILLFACTOR=80);
    
    create table bmsql_item (
      i_id     integer      not null,
      i_name   varchar(24),
      i_price  decimal(5,2),
      i_data   varchar(50),
      i_im_id  integer
    );
    
    create table bmsql_stock (
      s_w_id       integer       not null,
      s_i_id       integer       not null,
      s_quantity   integer,
      s_ytd        integer,
      s_order_cnt  integer,
      s_remote_cnt integer,
      s_data       varchar(50),
      s_dist_01    char(24),
      s_dist_02    char(24),
      s_dist_03    char(24),
      s_dist_04    char(24),
      s_dist_05    char(24),
      s_dist_06    char(24),
      s_dist_07    char(24),
      s_dist_08    char(24),
      s_dist_09    char(24),
      s_dist_10    char(24)
    ) WITH (FILLFACTOR=80) tablespace example3;

8. Database File Location Tuning (Commonly Used Method)

Place the main data directory, the xlog directory, and the example2 and example3 tablespaces on separate physical disks to avoid an I/O bottleneck. If there are only two high-performance disks, move the xlog directory first. If there are three, move the xlog directory and the example2 tablespace. For example:

PGDATA=/opt/data/mogdb
cd $PGDATA
mv pg_xlog /tpccdir1
ln -s /tpccdir1/pg_xlog .
cd pg_location
mv tablespace2 /tpccdir2
ln -s /tpccdir2/tablespace2 .
mv tablespace3 /tpccdir3
ln -s /tpccdir3/tablespace3 .
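The moves above can be wrapped in a small helper (stop the database first; the function name is illustrative):

```shell
# Move a directory to another disk and leave a symlink at the old
# location, as done for pg_xlog and the tablespaces above.
move_dir() {
    local src=$1 dest=$2
    mv "$src" "$dest"/
    ln -s "$dest/$(basename "$src")" "$(dirname "$src")/"
}

# Example: move_dir "$PGDATA/pg_xlog" /tpccdir1
```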

9. System Resource Observation Tools

  • htop is used to observe the CPU usage. On the ARM platform, htop must be compiled from source.

    When htop is used to monitor CPU usage on the database server and the client server under the optimal performance test, the CPU usage of each service should be very high, exceeding 90%. If the CPU usage of any service does not reach this level, the core binding may be configured incorrectly or another problem may exist; locate and fix it.

  • iostat is used to check the system IO usage.

  • sar is used to check the network usage.

  • nmon is used to monitor the overall system resources.

Sample screenshots

  • htop result on the database server

    htop.png

  • htop result on the client server

    客户端_htop.png

  • iostat result showing the system IO usage

    iostat.png

  • sar result showing the network usage

    sar.png


Ideal Results

Four-socket Kunpeng server with 256 CPUs, 1000 warehouses, and 500 terminals: about 2.5 million tpmC
Two-socket Kunpeng server with 100 warehouses and 100 terminals: about 0.9 million tpmC
Two-socket Kunpeng server with 100 warehouses and 300 terminals: about 1.4 million tpmC

Copyright © 2011-2024 www.enmotech.com All rights reserved.