Scale-out Cluster
Add database nodes to an existing cluster through a series of operations.
Purpose
PTK provides a cluster scale-out function to meet needs driven by business growth, cost, resource allocation, risk, and other factors.
Scale-out Logic
When PTK scales out a cluster, it packages static files such as the application and tool directories on the primary node, copies them to the target machine, and unpacks them into the corresponding directories. It then uses the kernel tool to initialize a new data directory, and finally refreshes the cluster configuration according to the topology of the new cluster as a whole.
Note that scale-out proceeds node by node. If scale-out fails on a node, PTK stops immediately and refreshes the configuration to include only the nodes that completed successfully.
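The node-by-node, stop-on-first-failure behavior can be sketched as follows. This is an illustrative shell sketch, not PTK source; `add_node` is a hypothetical stand-in for PTK's real per-node work:

```shell
# Placeholder for PTK's real per-node work (package static files on the
# primary, copy, unpack, initialize the data directory); here it always
# succeeds. This function is a hypothetical stand-in for illustration.
add_node() { true; }

# Scale out nodes one at a time, stopping at the first failure so that
# the refreshed cluster configuration only includes completed nodes.
scale_out_nodes() {
  completed=""
  for node in "$@"; do
    if add_node "$node"; then
      completed="$completed $node"
    else
      echo "scale-out failed at $node; config refreshed for:$completed" >&2
      break
    fi
  done
  echo "$completed"
}
```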
Scale-out Process Demonstration
Create The Configuration File Needed for Scaling-out
Use the following command to create a scale-out configuration file:

```shell
ptk template scale-out > scale-out.yaml
```
This creates a scale-out configuration file named `scale-out.yaml`; modify it as needed.
The contents of the scale-out template are described below.
```yaml
# New list of database servers to add; the same fields are supported as in installation
db_servers:
  - host: "replace host ip here"
    # Role only supports "standby" (default) or "cascade_standby"
    role: standby
    upstream_host: ""
    ssh_option:
      port: 22
      user: root
      password: "encrypted ssh password by ptk"

# List of CM servers.
# If the cluster had CM installed before scale-out, you need to specify the list of CM servers to scale out.
# In general, the list of CM servers is the same as the list of database servers,
# but if you scale out only the database or only CM on the new server, the two lists can differ.
cm_servers:
  - host: "replace host ip here"
```
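As an illustration, a filled-in configuration that adds a single standby might look like the following. The IP address is hypothetical, and the password placeholder stands for ciphertext produced by `ptk encrypt` (see Encrypt Sensitive Information):

```yaml
# Hypothetical values for illustration only.
db_servers:
  - host: "192.168.1.14"
    role: standby
    ssh_option:
      port: 22
      user: root
      password: "<ciphertext from ptk encrypt>"
```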
Perform Scale-out Operations On Clusters
Scale-out command:

```shell
ptk cluster -n <CLUSTER_NAME> scale-out -c CONFIG.yaml [--build-from BUILD_FROM_HOST] [--skip-create-user] [--skip-check-os] [--skip-check-distro] [--default-guc] [--skip-rollback] [--skip-gen-ptkc] [--cpu CPU_MODEL] [--not-limit-cm-nodes]
```
Options:
Option Name | Option Type | Description |
---|---|---|
-c | String | Specify the configuration file to use when scaling-out |
--build-from | String | Specify the data source node from which the cascade_standby node will be built when scaling-out; if not specified, build from the primary node by default. |
--skip-create-user | Bool | Skip user creation for scale-out nodes |
--skip-check-os | Bool | Skip OS-related checks for scale-out nodes |
--skip-check-distro | Bool | Skip distro checks for scale-out nodes |
--default-guc | Bool | Use the database default parameter configuration for scale-out nodes |
--skip-rollback | Bool | Skip rollback if errors are reported during scale-out |
--skip-gen-ptkc | Bool | Skip transmitting ptkc for scale-out nodes |
--cpu | String | Specify the CPU Model of the scale-out node. |
--not-limit-cm-nodes | Bool | Remove the limit on the number of CM nodes; by default, when there are more than 3 CM nodes, the count must be an odd number |
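For example, with a cluster registered under the hypothetical name `mog01` and the `scale-out.yaml` file prepared above, a typical invocation might look like:

```shell
ptk cluster -n mog01 scale-out -c scale-out.yaml
```

Optional flags such as `--skip-check-os` or `--default-guc` can be appended as needed.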
Scale-out Support List
Original Cluster | Scale-out Request | Supported | Solution |
---|---|---|---|
db1,db2 | db3 | Yes | |
db1+cm1,db2+cm2,db3+cm3 | db4+cm4 | Yes | |
db1+cm1,db2+cm2,db3+cm3 | db4 | Yes | |
db1+cm1,db2+cm2,db3+cm3 | cm4 | Yes | |
db1+cm1,db2+cm2,db3 | cm3 | No | Scale in db3, then scale out db3+cm3 |
db1+cm1,db2+cm2,cm3 | db3 | No | Scale in cm3, then scale out db3+cm3 |
Q&A
Can a cluster be scaled in?
Yes. Refer to Scale-in Cluster.
What is the maximum number of nodes for scale-out?
Up to 9 nodes.
Can a primary node be added by scale-out?
No. New nodes only support the standby or cascade_standby role.
Can I scale out a cluster with CM installed? If yes, are there restrictions?
Yes. See the scale-out support list above for details.
What does `--skip-rollback` do? When should it be used?
Usage: PTK performs scale-out node by node. If scale-out fails on a node, PTK rolls back the operations already performed on that node. With `--skip-rollback`, operations already performed on the node are not rolled back when scale-out fails.
When to use: use this option if you want to inspect the target node to see why the scale-out failed.
What does `--skip-check-distro` do? When should it be used?
Usage: by default, PTK checks whether the operating system of the node being scaled out matches that of the primary node; if they differ, the check fails. `--skip-check-distro` skips this check.
When to use:
- PTK misreports a homogeneous system as heterogeneous because its distro detection is incomplete.
- On heterogeneous systems that use the same MogDB package, this option can skip the check, but PTK does not guarantee the availability or correctness of the database nodes after a successful scale-out.
What does `--not-limit-cm-nodes` do? When should it be used?
Usage: during scale-out, if there are more than 3 CM nodes, PTK requires the number of CM nodes to be odd, in order to prevent primary-selection problems caused by CM cluster split-brain. This option bypasses the mandatory odd-number check.
When to use: this option is not recommended; forcing it is likely to cause primary-selection problems due to CM cluster split-brain, which can then cause CM to fail.
If the cluster currently has CM deployed and VIPs configured, will scale-out affect the VIPs?
No. For clusters with VIPs configured, PTK refreshes the CM VIP after scale-out to ensure the CM VIP information remains correct.