Upgrade Guide

Overview

This document provides guidance on the version upgrade and rollback processes. It also offers methods for resolving and troubleshooting common problems.

Intended Audience

This document is mainly intended for upgrade operators, who must have the following experience and skills:

  • Be familiar with the networking of the current network and versions of related NEs (network elements).
  • Have maintenance experience of the related devices and be familiar with their operation and maintenance methods.

Upgrade Scheme

This section provides guidance on selecting an upgrade mode.

Decide whether to upgrade the current system based on the new features of MogDB and the state of your database.

The supported upgrade modes are in-place upgrade and gray upgrade. The upgrade strategies are major upgrade and minor upgrade.

After the upgrade mode is determined, the system automatically chooses the suitable upgrade strategy.

  • In-place upgrade: All services need to be stopped during the upgrade, and all nodes are upgraded at the same time.

  • Gray upgrade: Services can continue running during the upgrade, and all nodes are also upgraded at the same time. (Currently, only the gray upgrade from version 1.1.0 to 2.0 and above is supported.)

Version Requirements Before the Upgrade (Upgrade Path)

Table 1 lists the MogDB upgrade version requirements.

Table 1 Version requirements before the upgrade (upgrade path)

| Version | Description |
| ------- | ----------- |
| MogDB 2.0 | Needs to be upgraded to MogDB 2.1 first, then upgraded to MogDB 3.0 |
| MogDB 2.1 | Can be upgraded to MogDB 3.0 |

NOTE: You can run the following command to check the version before the upgrade:

gsql -V | --version

Impact and Constraints

The following precautions need to be considered during the upgrade:

  • The upgrade cannot be performed concurrently with capacity expansion or reduction.
  • VIP (virtual IP) is not supported.
  • During the upgrade, you are not allowed to modify the wal_level, max_connections, max_prepared_transactions, and max_locks_per_transaction GUC parameters. Otherwise, the instance will fail to start properly after a rollback.
  • It is recommended that the upgrade be performed when the database system is under a light workload. You can determine the off-peak hours based on experience, such as holidays and festivals.
  • Before the upgrade, make sure that the database is normal. You can run the gs_om -t status command to check the database status. If the returned value of cluster_state is Normal, the database is normal.
  • Before the upgrade, make sure that mutual trust is established between database nodes. You can run the ssh hostname command on any node to connect to another node to verify this. If connecting between any two nodes does not require a password, mutual trust is normal. (Generally, when the database status is normal, mutual trust is also normal. A combined check sketch follows this list.)
  • Before and after the upgrade, the database deployment mode must be kept consistent. Before the upgrade, the database deployment mode will be verified. If it is changed after the upgrade, an error will occur.
  • Before the upgrade, make sure that the OS is normal. You can check the OS status using the gs_checkos tool.
  • In-place upgrade requires stopping of services. Gray upgrade supports full-service operations.
  • The database must be running normally, and the data on the primary database node (DN) must be fully synchronized to the standby DN.
  • Kerberos cannot be enabled during the upgrade.
  • You are not allowed to modify the version.cfg file decompressed from the installation package.
  • During the upgrade, if an error causes upgrade failure, you need to perform rollback operations manually. The next upgrade can be performed only after the rollback is successful.
  • After a rollback, if the next upgrade succeeds, GUC parameters that were set before the earlier upgrade was submitted will become invalid.
  • During the upgrade, you are not allowed to set GUC parameters manually.
  • During the gray upgrade, a service interruption of less than 10 seconds will occur.
  • During the upgrade, OM operations can be performed only when the kernel and OM versions are consistent, that is, when the kernel code and OM code come from the same software package. If the pre-installation script of the new package has been executed but the upgrade has not been performed, or the pre-installation script of the baseline package has not been executed after a rollback, the kernel code will be inconsistent with the OM code.
  • After the upgrade, if new fields have been added to a system table but they are not shown by the \d meta-command, you can run a SELECT statement to query the new fields.
  • The upgrade is allowed only when the value of enable_stream_replication is on.
  • During the gray upgrade, the number of concurrent read/write services must be less than 200.
  • If the MOT is used in MogDB 1.1.0, MogDB 1.1.0 cannot be upgraded to MogDB 2.0.
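
As a convenience, the following sketch combines several of the checks above into a single pre-flight script. It is illustrative only and not part of the official tooling: node2 is a placeholder for any other database node, and it assumes a locally reachable database named postgres.

    #!/bin/bash
    # Run as user omm on the primary database node.

    # The database must be normal: cluster_state should be Normal.
    gs_om -t status | grep cluster_state

    # Mutual trust: this must connect without prompting for a password.
    # "node2" is a placeholder for any other database node.
    ssh node2 "hostname"

    # The upgrade is allowed only when enable_stream_replication is on.
    gsql -d postgres -c "SHOW enable_stream_replication;"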

Upgrade Process

This section describes the upgrade process.

Figure 1 Upgrade process


NOTE: The time listed in the following table is for reference only. The actual time required depends on the upgrade environment.

Table 2 Estimated upgrade efficiency

| Procedure | Recommended Start Time | Time Required (Day/Hour/Minute) | Service Interruption Time | Remarks |
| --- | --- | --- | --- | --- |
| Perform the pre-upgrade preparations and check operations. | One day before the upgrade | About 2 to 3 hours | No impact on services | Pre-upgrade check, data backup, and software package verification |
| Perform the upgrade. | Off-peak hours | The time is mainly spent in starting and stopping the database and modifying the system table of each database. The upgrade usually takes less than 30 minutes. | The same as the upgrade time, generally no more than 30 minutes. | Performed based on the Upgrade Guide |
| Verify the upgrade. | Off-peak hours | About 30 minutes | The same as the upgrade verification time, about 30 minutes. | - |
| Submit the upgrade. | Off-peak hours | Usually less than 10 minutes | The same as the upgrade submission time, generally no more than 10 minutes. | - |
| Roll back the upgrade. | Off-peak hours | Usually less than 30 minutes | The same as the rollback time, generally no more than 30 minutes. | - |

Pre-Upgrade Preparations and Check

Pre-Upgrade Preparations and Checklist

Table 3 Pre-upgrade preparations and checklist

| No. | Item to Be Prepared for the Upgrade | Preparation Content | Recommended Start Time | Time Required (Day/Hour/Minute) |
| --- | --- | --- | --- | --- |
| 1 | Collect node information. | Obtain the name, IP address, and passwords of users root and omm of related database nodes. | One day before the upgrade | 1 hour |
| 2 | Set remote login as user root. | Set the configuration file that allows remote login as user root. | One day before the upgrade | 2 hours |
| 3 | Back up data. | For details, see the Backup and Restoration section in the Administrator Guide. | One day before the upgrade | The time taken varies depending on the volume of data to be backed up and the backup strategy. |
| 4 | Obtain and verify the installation package. | Obtain the installation package and verify the package integrity. | One day before the upgrade | 0.5 hour |
| 5 | Perform the health check. | Check the OS status using the gs_checkos tool. | One day before the upgrade | 0.5 hour |
| 6 | Check the disk usage of each database node. | Check the disk usage by running the df command. | One day before the upgrade | 0.5 hour |
| 7 | Check the database status. | Check the database status using the gs_om tool. | One day before the upgrade | 0.5 hour |

NOTE: The time required varies depending on the environment, including data volume, server performance, and other factors.

Collecting Node Information

You can contact the system administrator to obtain the environment information, such as the name and IP address of each database node and the passwords of users root and omm.

Table 4 Node information

| No. | Node Name | IP Address of the Node | Password of User root | Password of User omm | Remarks |
| --- | --- | --- | --- | --- | --- |
| 1 | - | - | - | - | - |

Backing Up Data

If the upgrade fails, services will be affected. Therefore, back up data in advance so that services can be quickly restored if a problem occurs.

For details about data backup, see the Backup and Restoration section in the Administrator Guide.
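
As an illustration only (the Administrator Guide remains the authoritative reference), a minimal logical backup of a single database with gs_dump might look like the sketch below. The port, database name, and output path are placeholders; adapt them to your environment.

    # Run as user omm; the port, database name, and output path are examples.
    gs_dump -U omm -p 26000 postgres -f /opt/backup/postgres_before_upgrade.sql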

Obtaining the Installation Package

You can obtain the installation package from this website.
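
Verify the integrity of the package before using it. The following is a minimal sketch that assumes the download page publishes a SHA-256 digest for the package; the package file name is illustrative.

    # Compute the digest and compare it with the value published on the download page.
    sha256sum MogDB-3.0.1-CentOS-x86_64.tar.gz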

Performing the Health Check

The gs_checkos tool can be used to check the OS status.

Prerequisites

  • The current hardware and network environment is normal.
  • The mutual trust between the root users of all hosts is normal.
  • The gs_checkos command can be executed only as user root.

Procedure

  1. Log in to the primary database node as user root.

  2. Run the following command to check the server OS parameters:

    # gs_checkos -i A

    Checking the OS parameters ensures that the database can be pre-installed normally and can run safely and efficiently after installation.

Checking the Disk Usage of the Database Node

It is recommended that the upgrade be performed when the disk usage of each database node is less than 80%.
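
For example, you can check all nodes at once with gs_ssh. This is a sketch: /opt/mogdb/data is a placeholder for your actual data directory.

    # Run as user omm; replace /opt/mogdb/data with your data directory.
    gs_ssh -c "df -h /opt/mogdb/data"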

Checking the Database Status

This section introduces how to check the database status.

Procedure

  1. Log in to the primary database node as user omm and run the source command to reload environment variables.

    # su - omm
    $ source /home/omm/.bashrc
  2. Run the following command to check the database status:

    gs_om -t status
  3. Ensure that the database status is normal.

Upgrade Procedure

This section introduces details about in-place upgrade and gray upgrade.

Procedure

  1. Log in to the primary database node as user root.

  2. Create a directory for storing the new package.

    # mkdir -p /opt/software/mogdb_upgrade
  3. Upload the new package to the /opt/software/mogdb_upgrade directory and decompress the package.

  4. Go to the script directory.

    # cd /opt/software/mogdb_upgrade/script
  5. Create a static folder and put the plugin package into the script/static folder. (This step is required because the current version does not detect whether plugins are in use, so the upgrade script installs plugins by default. The plugins will be split into a separate component in later versions.)

    For example:

    mkdir static
    cd static/
    wget https://cdn-mogdb.enmotech.com/mogdb-media/3.0.1/Plugins-3.0.1-CentOS-x86_64.tar.gz
  6. Before the in-place or gray upgrade, execute the pre-installation script by running the gs_preinstall command.

    # ./gs_preinstall -U omm -G dbgrp  -X /opt/software/mogdb/clusterconfig.xml
  7. Switch to user omm.

    # su - omm
  8. After ensuring that the database status is normal, run the required command to perform the in-place upgrade or gray upgrade.

    Example one: Execute the gs_upgradectl script to perform the in-place upgrade.

    gs_upgradectl -t auto-upgrade -X /opt/software/mogdb/clusterconfig.xml

    Example two: Execute the gs_upgradectl script to perform the gray upgrade.

    gs_upgradectl -t auto-upgrade -X /opt/software/mogdb/clusterconfig.xml --grey

Upgrade Verification

This section introduces upgrade verification and provides detailed use cases and operations.

Verifying the Project Checklist

Table 5 Verification item checklist

| No. | Verification Item | Check Standard | Check Result |
| --- | --- | --- | --- |
| 1 | Version check | Check whether the version is correct after the upgrade. | - |
| 2 | Health check | Use the gs_checkos tool to check the OS status. | - |
| 3 | Database status | Use the gs_om tool to check the database status. | - |

Querying the Upgrade Version

This section introduces how to check the version.

Procedure

  1. Log in to the primary database node as user omm and run the source command to reload environment variables.

    # su - omm
    $ source /home/omm/.bashrc
  2. Run the following command to check the version information of all nodes:

    gs_ssh -c "gsql -V"

Checking the Database Status

This section introduces how to check the database status.

Procedure

  1. Log in to the primary database node as user omm.

    # su - omm
  2. Run the following command to check the database status:

    gs_om -t status

    If the value of cluster_state is Normal, the database is normal.

Upgrade Submission

After the upgrade, if the verification is successful, the next step is to submit the upgrade.

NOTE: Once the upgrade is submitted, it cannot be rolled back.

Procedure

  1. Log in to the primary database node as user omm.

    # su - omm
  2. Run the following command to submit the upgrade:

    gs_upgradectl -t commit-upgrade  -X /opt/software/mogdb/clusterconfig.xml
  3. Reset the control file format to be compatible with the new ustore storage engine added in version 2.1.0 (only for the upgrade from 2.0.1 to 2.1).

    CAUTION:

    • This operation is irreversible. After it is executed, you cannot downgrade back to version 2.0.1.
    • Before performing this operation, it is recommended to make a full data backup by referring to Logical Backup Recovery.
    pg_resetxlog -f $PGDATA

    The output appears as follows:

    Transaction log reset

Version Rollback

This section introduces how to roll back the upgrade.

Procedure

  1. Log in to the primary database node as user omm.

    # su - omm
  2. Run the following command to perform the rollback (rolling back the kernel code). After the rollback, if you need to keep the kernel code and OM code versions consistent, execute the pre-installation script in the old package. (For details, see the pre-installation script step in Upgrade Procedure.)

    gs_upgradectl -t auto-rollback  -X /opt/software/mogdb/clusterconfig.xml

    NOTE: If the database is abnormal, run the following command to perform a forcible rollback:

       gs_upgradectl -t auto-rollback -X /opt/software/mogdb/clusterconfig.xml   --force
  3. Check the version after the rollback.

    gs_om -V | --version

    If the upgrade fails, perform the following operations to resolve the issue:

    a. Check whether the environment is abnormal.

    For example, the disk is full, the network is faulty, or the installation package or upgrade version is incorrect. After the problem is located and resolved, try to perform the upgrade again. (An illustrative triage sketch follows these steps.)

    b. If no environment issue is found or the upgrade fails again, collect related logs and contact technical engineers.

    Run the following command to collect logs:

    gs_collector --begin-time='20200724 00:00' --end-time='20200725 00:00'

    If permitted, you are advised to retain the environment.
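
    The following is an illustrative triage sketch for step a; the commands only gather information and change nothing:

    df -h                   # Is any disk full?
    gs_ssh -c "hostname"    # Are all nodes reachable with mutual trust?
    gsql -V                 # Is the installed version the expected one?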
