Documentation: v1.1

Installing a Database

Prepare the Configuration File

A configuration file used for describing the cluster topology is required when you install a database or perform the scale-out operation using PTK. Two methods can be used to generate a configuration file. One is to modify the configuration sample, and the other is to automatically generate a configuration file using the template command. For details, see Creating a Configuration File.
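For example, if the template subcommand mentioned above prints the sample configuration to standard output (the exact command shape is an assumption here, not confirmed by this page), a starting file can be generated and then edited by hand:

```shell
# Assumption: `ptk template` prints a sample topology configuration to
# stdout; redirect it into a file, then adjust hosts, user, and ports.
ptk template > config.yaml
```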

The following is an example of a configuration file with one primary node and one standby node:

Assuming the file name is config.yaml

global:
  cluster_name: c1
  user: omm
  db_port: 26000
  base_dir: /opt/mogdb
  ssh_option:
      port: 22
      user: root
      key_file: ~/.ssh/id_rsa

db_servers:
  - host: 192.168.100.100
    role: primary
    # ssh_option:
    #   port: 22
    #   user: root
    #   key_file: ~/.ssh/id_rsa
  - host: 192.168.100.101
    role: standby
    # ssh_option:
    #   port: 22
    #   user: root
    #   key_file: ~/.ssh/id_rsa

The global section of the configuration file specifies the unique cluster name (c1), the system user that runs the database (omm), the database port (26000), the base directory where the database will be installed (/opt/mogdb), and the SSH authentication information used to connect to the target servers.

The db_servers section of the configuration file lists the database instances to be deployed. For each instance, specify at least the IP address of the server and the role of the database; supported roles are primary, standby, and cascade_standby.

If all servers in the cluster share the same SSH authentication information, you can specify it once in the global section. If servers have different login information, fill out the ssh_option block (the commented-out part in the example above) under db_servers for each server separately. When configuring SSH login information, you can use either password authentication or key authentication. For details on configuring the ssh_option field, refer to the configuration file documentation.
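For instance, a per-server override using password authentication might look like the fragment below (the ssh_option password field also appears in the resource pooling sample later on this page; the plaintext placeholder value here is an assumption, shown only to illustrate the shape):

```yaml
# Per-server ssh_option override (the commented-out part in the sample),
# using password authentication instead of a key file.
db_servers:
  - host: 192.168.100.101
    role: standby
    ssh_option:
      port: 22
      user: root
      password: "<your-password>"
```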

For the user in the ssh_option field of the configuration file, we recommend installing directly as the root user. Although PTK gains broad privileges this way, it restricts its behavior as much as possible: during installation, PTK uses the root user only to create the common user and configure its ulimit settings; all remaining operations are performed by the common user it created.

However, in some scenarios the root user is not allowed to log in remotely; in that case, PTK also supports installation with a common user.

In this case, you need to create a system user (assume omm) to run the database in advance, and configure the omm user's ulimit settings in /etc/security/limits.conf so that the following requirements are met; otherwise, the database may fail to start:

omm     soft    as      unlimited
omm     hard    as      unlimited
omm     soft    nproc   unlimited
omm     hard    nproc   unlimited
omm     soft    nofile  1000000
omm     hard    nofile  1000000
omm     soft    stack   unlimited
omm     hard    stack   unlimited

Then you can configure the login information of the omm user in ssh_option.
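As a sketch of the preparation steps above, the limits entries can be staged in a local file first and then appended to the system configuration as root (writing to a staging file keeps the snippet reviewable before touching system files):

```shell
# Stage the required limits entries for the omm user in a local file.
cat > omm-limits.conf <<'EOF'
omm     soft    as      unlimited
omm     hard    as      unlimited
omm     soft    nproc   unlimited
omm     hard    nproc   unlimited
omm     soft    nofile  1000000
omm     hard    nofile  1000000
omm     soft    stack   unlimited
omm     hard    stack   unlimited
EOF
# Then, as root, create the user and apply the entries:
#   useradd -m omm
#   cat omm-limits.conf >> /etc/security/limits.conf
```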

Note that if PTK manages the cluster as a common user, some features that require sudo privileges will not be available, such as ptk cluster install-mogha.

Check System Environment

After preparing the configuration file, run the checkos command to perform an environment check on all target servers before installing the database, to ensure that the installation requirements are met.

The system check command is as follows, with --detail as an optional parameter:

ptk checkos -f config.yaml [--detail]

After checking the system, PTK will output the checking result in the terminal. Before installing the database, you need to make sure that there are no Abnormal status items in the result.

If there are any Abnormal items, PTK generates a system repair script in the current directory (a file prefixed with root_fix_os). The repair script may contain commands that require superuser privileges. If your configuration file provides a superuser for SSH login, you can run a quick repair with the following command:

ptk exec -f config.yaml -s <root_fix_os script>

Alternatively, you can copy the repair script to each server and execute it manually. The script contains repair commands for all nodes and automatically runs the commands matching the IP of the server it is executed on.
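Using the sample host from the configuration above (the script name is a wildcard, since PTK generates it with a root_fix_os prefix), the manual route might look like:

```shell
# Copy the generated repair script to a node and run it there; the script
# picks the repair commands matching the local server's IP automatically.
scp root_fix_os*.sh root@192.168.100.100:/tmp/
ssh root@192.168.100.100 'bash /tmp/root_fix_os*.sh'
```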

The repair script generated by PTK can fix abnormal environment configuration items. For a port conflict error, however, you need to check whether the port is occupied and resolve it manually.

Install MogDB

Once the system check is complete, you are ready to use PTK for database installation, which requires only the following command:

$ ptk install -f config.yaml [-y] [-p <package>] [--db-version <MogDB version>]

Note: Interactive confirmation is required during installation. You can add the -y parameter to confirm automatically. If no initial password is configured in the configuration file, -y cannot skip the password-setting step.

PTK will automatically download MogDB of the latest LTS (long-term support) version for installation by default.

Parameter --db-version provided by PTK allows you to install MogDB of a specified version.

You can run the ptk candidate db command to query the database versions supported by PTK.

If the machine where PTK is located does not have access to the Internet, you can download the MogDB package in advance, and specify the path to the local package during installation with the -p or --pkg parameter.

Installation Result Status List

Status                  Description
precheck_success        Precheck is successful before installation.
precheck_failed         Precheck fails before installation.
setup_db_success        The database instance is successfully installed.
setup_db_failed         The database instance fails installation.
setup_cm_success        The CM instance is successfully installed.
setup_cm_failed         The CM instance fails installation.
init_dss_failed         The DSS component fails initialization.
init_dss_success        The DSS component is successfully initialized.
initdb_failed           The database fails initialization.
initdb_success          The database is successfully initialized.
rollback_success        The installation rolls back successfully.
need_rollback_manually  The installation needs to be rolled back manually.
start_success           The database is successfully started.
start_failed            The database startup fails.
need_start_manually     The database needs to be started manually.

Description of PTK Optimized Parameters

For databases installed using PTK, PTK automatically optimizes some parameters based on the configuration of the server where the database resides. If you do not want PTK to optimize parameters, specify the --default-guc option during installation; PTK then keeps the database default values for all parameters except the mandatory parameters configured during installation.

Mandatory Parameters

The following are default parameters set during database installation.

Parameter                Value
local_bind_address       host of db_servers in the PTK configuration file
port                     db_port of db_servers in the PTK configuration file
application_name         Node name assigned by PTK to the instance, such as db_6001
available_zone           az_name of db_servers in the PTK configuration file
unix_socket_directory    tmp_dir in the PTK configuration file
unix_socket_permissions  0700
log_file_mode            0600
ssl                      off
ssl_cert_file            server.crt
ssl_key_file             server.key
ssl_ca_file              cacert.pem
log_directory            pg_log/$application_name under $log_dir in the PTK configuration file
audit_directory          pg_audit/$application_name under $log_dir in the PTK configuration file
listen_addresses         *
max_wal_senders          16

If there is only one database instance in the cluster, most_available_sync=on is also set to enable the maximum availability mode.
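As a quick sanity check after installation, a few of the enforced values can be read back with the gsql client shipped with MogDB (the port and user follow the sample configuration above):

```shell
# Show two of the mandatory parameters; run as the omm database user.
su - omm -c 'gsql -d postgres -p 26000 -c "show listen_addresses;"'
su - omm -c 'gsql -d postgres -p 26000 -c "show max_wal_senders;"'
```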

Optimized Parameters

If the memory of the server in which an instance is to be deployed is greater than 4 GB, PTK will configure some recommended values by default to optimize the database. (If you do not need automatic optimization, disable it using --default-guc during installation.)

Note: If the server memory is less than 4 GB, the following parameters are not modified.

The following lists the optimized parameters which are consistent with those in Recommended Parameters Settings on the official website of MogDB.

For PTK 1.2.0 and later versions, you can query the recommended parameter list for a specific version with the following command:

ptk rec-guc <DB-VERSION>

# Example 1: If target version is 5.0.5
# ptk rec-guc 5.0.5
# Example 2: Query the list comparison between version 3.0.5 and 5.0.5 (Note: Comparison only supports 2 versions)
# ptk rec-guc 3.0.5 5.0.5
In the following tables, N denotes the memory size, in GB, of the server where the instance is deployed.

  • 4 < N <= 8

Parameter                  Value
max_connections            500
max_prepared_transactions  500
max_process_memory         (0.6 x N) GB
shared_buffers             (0.2 x N) GB
work_mem                   16 MB
maintenance_work_mem       512 MB
wal_buffers                128 MB

  • 8 < N <= 64

Parameter                  Value
max_connections            1000
max_prepared_transactions  1000
max_process_memory         (0.7 x N) GB
shared_buffers             (0.2 x N) GB
work_mem                   32 MB
maintenance_work_mem       1 GB
wal_buffers                512 MB

  • N > 64

Parameter                  Value
max_connections            3000
max_prepared_transactions  3000
max_process_memory         (0.8 x N) GB
shared_buffers             (0.3 x N) GB
work_mem                   64 MB
maintenance_work_mem       2 GB
wal_buffers                1 GB
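The memory tiers above can be sketched as a small helper function. This is illustrative only (rec_max_connections is a hypothetical name, not a PTK command), mirroring the max_connections row of the three tables:

```shell
# Pick the recommended max_connections for a server with N GB of memory,
# following the tiers above; N <= 4 GB leaves the database default.
rec_max_connections() {
  local n=$1
  if   [ "$n" -le 4 ];  then echo "default"
  elif [ "$n" -le 8 ];  then echo 500
  elif [ "$n" -le 64 ]; then echo 1000
  else                       echo 3000
  fi
}
rec_max_connections 16   # -> 1000
```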

Part Two: Optimized Parameters

Parameter                        Value
remote_read_mode                 non_authentication
password_encryption_type         1
password_reuse_time              0
password_lock_time               0
password_effect_time             0
session_timeout                  0
modify_initial_password          off
wal_level                        logical
full_page_writes                 off
wal_log_hints                    off
xloginsert_locks                 48
advance_xlog_file_num            10
wal_keep_segments                1024
most_available_sync              on
catchup2normal_wait_time         0
enable_slot_log                  on
max_replication_slots            32
wal_receiver_timeout             60s
sync_config_strategy             none_node
log_line_prefix                  '%m %u %d %r %p %S'
log_checkpoints                  on
vacuum_cost_limit                1000
autovacuum_max_workers           10
autovacuum_naptime               20s
autovacuum_vacuum_cost_delay     10
autovacuum_vacuum_scale_factor   0.05
autovacuum_analyze_scale_factor  0.02
autovacuum_vacuum_threshold      200
autovacuum_analyze_threshold     200
autovacuum_io_limits             104857600
instr_unique_sql_count           200000
enable_wdr_snapshot              on
log_min_duration_statement       200
track_activity_query_size        2048
enable_instr_rt_percentile       off
cstore_buffers                   16 MB
local_syscache_threshold         32 MB
standby_shared_buffers_fraction  1
checkpoint_segments              1024
checkpoint_completion_target     0.8
max_files_per_process            100000
behavior_compat_options          display_leading_zero
lc_messages                      en_US.UTF-8
lc_monetary                      en_US.UTF-8
lc_numeric                       en_US.UTF-8
lc_time                          en_US.UTF-8
enable_opfusion                  off

Resource Pooling Deployment

Resource pooling deployment enables the primary and standby nodes to share the same storage, implementing resource-pooling HA based on disk arrays. This eliminates the multiplied storage footprint of traditional HA deployment, where each node keeps a full copy of the data. In addition, the standby node supports real-time consistent reads.

The following figure shows the resource pooling architecture.

Resource pooling architecture

PTK supports resource pooling deployment. Because the resource pooling architecture introduces the new DSS process and relies on CM for high-availability management of the cluster, you need to add DSS- and CM-related configuration to the configuration file. The following is a sample:

global:
    cluster_name: c1
    user: omm
    group: omm
    db_password: pTk6Y2I1OWFiZjI8RT1FPUE9RFZfVTUzTy0zUlR5TDZUTmh1STE4VkQ0T2RDYXBNMThoSlNhZDY3UHB4WVE=
    db_port: 26000
    base_dir: /data/mogdb
    ssh_option:
        port: 22
        user: root
        password: pTk6ZDJmZjZlZjI8RT1FPUE9RUs4WEh1OUJRQURfM29GUjZyb3gzZGU3LVRmLTM3Mm5ySmZ5anJXVkNOTUE=
    cm_option:
        dir: /data/mogdb/cm
        cm_server_port: 15300
        db_service_vip: ""
    dss_option:
        dir: /data/mogdb/dss
        dss_port: 26010
        dms_port: 26020
        data_vg_name: data
        data_vg_path: /dev/data_shared
        inst_vg_map:
          172.23.1.28: /dev/data_private_28
          172.23.1.45: /dev/data_private_45
        cm_share_vg_path: /dev/cm_shared
        cm_voting_vg_path: /dev/cm_vote
        enable_ssl: on
db_servers:
    - host: 172.23.1.28
      role: primary
    - host: 172.23.1.45
      role: standby

The rest of the configuration is the same as for a normal installation; the dss_option section is described below.

Field              Description
dir                Installation directory for DSS.
dss_port           Port on which the dssserver process listens at runtime; the default value is 26010.
dms_port           Port for communication between database kernels in the resource pooling architecture; the default value is 26020.
data_vg_name       Customized shared data volume name; defaults to data.
data_vg_path       Path to the shared data volume disk device.
inst_vg_map        Private disk mapping for each database instance; the key is the instance IP and the value is the private data disk path for that instance.
cm_share_vg_path   CM shared disk device path.
cm_voting_vg_path  CM voting disk device path.
enable_ssl         Whether to enable SSL encrypted communication.

After preparing the configuration file, the installation procedure is the same as the normal cluster installation above; just add the --enable-dss parameter to the install command:

ptk install -f config.yaml --enable-dss