
Documentation:v2.0


FAQs

Is character set conversion supported?

Character set conversion is supported by default, for example migrating a GBK source database to a UTF8 destination database.

Note that column length may change when a string conversion is performed, and CHAR-type data requires special attention. For example, an Oracle CHAR(1) column migrated to openGauss/MogDB may become CHAR(2), so service queries can return values padded with extra spaces.

Column length change?

Yes. Non-single-byte strings, such as Chinese text, are subject to length changes because the source and target database encodings differ.

The following scenarios exist:

  • The source column uses character semantics and is migrated to byte semantics: the column length doubles.

    • Migration from MySQL to openGauss/MogDB in a non-PG/B compatibility mode triples the length, e.g. VARCHAR(10) becomes VARCHAR(30).
  • Character encoding conversion is involved: migrating from GBK to UTF-8 increases the length by 50%, rounded up.

| Source DB | Source Charset | Source Type | Target Type | Target DB | Target Charset | Target dbCompatibility |
|-----------|----------------|-------------|-------------|-----------|----------------|------------------------|
| Oracle | ZHS16GBK | VARCHAR2(100 BYTE) | VARCHAR(150) | openGauss | UTF8 | A |
| Oracle | ZHS16GBK | VARCHAR2(100 CHAR) | VARCHAR(300) | openGauss | UTF8 | A |
| Oracle | ZHS16GBK | VARCHAR2(100 BYTE) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| Oracle | ZHS16GBK | VARCHAR2(100 CHAR) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| Oracle | ZHS16GBK | VARCHAR2(100 BYTE) | VARCHAR(100) | openGauss | GBK | A |
| Oracle | ZHS16GBK | VARCHAR2(100 CHAR) | VARCHAR(200) | openGauss | GBK | A |
| Oracle | ZHS16GBK | VARCHAR2(100 BYTE) | VARCHAR(100) | openGauss | GBK | PG/B |
| Oracle | ZHS16GBK | VARCHAR2(100 CHAR) | VARCHAR(100) | openGauss | GBK | PG/B |
| Oracle | AL32UTF8 | VARCHAR2(100 BYTE) | VARCHAR(100) | openGauss | UTF8 | A |
| Oracle | AL32UTF8 | VARCHAR2(100 CHAR) | VARCHAR(300) | openGauss | UTF8 | A |
| Oracle | AL32UTF8 | VARCHAR2(100 BYTE) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| Oracle | AL32UTF8 | VARCHAR2(100 CHAR) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| Oracle | ZHS16GBK | CHAR(1) | CHAR(1) | openGauss | UTF8 | A |
| Oracle | ZHS16GBK | CHAR(2) | CHAR(3) | openGauss | UTF8 | A |
| Oracle | ZHS16GBK | CHAR(1) | CHAR(1) | openGauss | UTF8 | PG/B |
| Oracle | ZHS16GBK | CHAR(2) | CHAR(2) | openGauss | UTF8 | PG/B |
| Oracle | AL32UTF8 | CHAR(1) | CHAR(1) | openGauss | UTF8 | A |
| Oracle | AL32UTF8 | CHAR(2) | CHAR(2) | openGauss | UTF8 | A |
| Oracle | AL32UTF8 | CHAR(1) | CHAR(1) | openGauss | UTF8 | PG/B |
| Oracle | AL32UTF8 | CHAR(2) | CHAR(2) | openGauss | UTF8 | PG/B |
| Oracle | ZHS16GBK | NCHAR/NVARCHAR2(2) | CHAR/VARCHAR(8) | openGauss | GBK/GB18030 | A |
| Oracle | ZHS16GBK | NCHAR/NVARCHAR2(2) | CHAR/VARCHAR(2) | openGauss | GBK/GB18030 | PG/B |
| Oracle | AL32UTF8 | NCHAR/NVARCHAR2(2) | CHAR/VARCHAR(6) | openGauss | UTF8 | A |
| Oracle | AL32UTF8 | NCHAR/NVARCHAR2(2) | CHAR/VARCHAR(2) | openGauss | UTF8 | PG/B |
| MySQL | GBK | VARCHAR(100) | VARCHAR(300) | openGauss | UTF8 | A |
| MySQL | GBK | VARCHAR(100) | VARCHAR(300) | openGauss | GBK | A |
| MySQL | UTF8 | VARCHAR(100) | VARCHAR(300) | openGauss | UTF8 | A |
| MySQL | GBK | VARCHAR(100) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| MySQL | GBK | VARCHAR(100) | VARCHAR(100) | openGauss | GBK | PG/B |
| MySQL | UTF8 | VARCHAR(100) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| DB2 | GBK | VARCHAR(100) | VARCHAR(150) | openGauss | UTF8 | A |
| DB2 | GBK | VARCHAR(100) | VARCHAR(100) | openGauss | GBK | A |
| DB2 | UTF8 | VARCHAR(100) | VARCHAR(100) | openGauss | UTF8 | A |
| DB2 | GBK | VARCHAR(100) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| DB2 | GBK | VARCHAR(100) | VARCHAR(100) | openGauss | GBK | PG/B |
| DB2 | UTF8 | VARCHAR(100) | VARCHAR(100) | openGauss | UTF8 | PG/B |
| DB2 | ZHS16GBK | CHAR(1) | CHAR(1) | openGauss | UTF8 | A |
| DB2 | ZHS16GBK | CHAR(2) | CHAR(3) | openGauss | UTF8 | A |
| DB2 | ZHS16GBK | CHAR(1) | CHAR(1) | openGauss | UTF8 | PG/B |
| DB2 | ZHS16GBK | CHAR(2) | CHAR(2) | openGauss | UTF8 | PG/B |
| DB2 | AL32UTF8 | CHAR(1) | CHAR(1) | openGauss | UTF8 | A |
| DB2 | AL32UTF8 | CHAR(2) | CHAR(2) | openGauss | UTF8 | A |
| DB2 | AL32UTF8 | CHAR(1) | CHAR(1) | openGauss | UTF8 | PG/B |
| DB2 | AL32UTF8 | CHAR(2) | CHAR(2) | openGauss | UTF8 | PG/B |

Starting from 2.9.8, migrating Oracle ZHS16GBK NCHAR/NVARCHAR to MogDB GBK in A compatibility mode multiplies the length by 4, because some Chinese characters become 4 bytes when encoded in GB18030-2022. NCHAR/NVARCHAR counts each of them as 1 character, so the original ×2 factor would make the target column too short.
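The length rules above can be sketched as a small helper. This is illustrative only; the expansion factors come from the table (GBK to UTF8 adds 50% rounded up, character-to-byte semantics under UTF8 in A mode is ×3, and the GB18030-2022 NCHAR/NVARCHAR rule from 2.9.8 is ×4):

```python
import math

def target_length(src_len: int, factor: float) -> int:
    """Apply a length-expansion factor and round up, as in the table above."""
    return math.ceil(src_len * factor)

# GBK -> UTF8 charset conversion in A mode: +50%, rounded up
print(target_length(100, 1.5))  # VARCHAR2(100 BYTE) -> VARCHAR(150)
# Character semantics migrated to byte semantics under UTF8 (A mode): x3
print(target_length(100, 3))    # VARCHAR2(100 CHAR) -> VARCHAR(300)
# NCHAR/NVARCHAR to GBK/GB18030 in A mode (from 2.9.8): x4
print(target_length(2, 4))      # NVARCHAR2(2) -> CHAR/VARCHAR(8)
```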

Empty string and NULL conversions?

  • Oracle '' = NULL
  • DB2 '' != NULL
  • MySQL '' != NULL
  • PostgreSQL '' != NULL
  • openGauss
    • Compatibility type A: '' = NULL
    • Compatibility type PG: '' != NULL
    • Compatibility type B: '' != NULL
  • Informix '' != NULL

When the target database handles empty strings differently from the source database, MTK converts the data according to the column's NOT NULL constraint.

If the column has a NOT NULL constraint and contains '', the '' is converted to " " (a single space); if the column allows NULL, '' is converted to NULL.
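The conversion rule above can be sketched as follows (an illustrative model of the behavior described, not MTK's actual code):

```python
def convert_empty(value, not_null: bool):
    """When source and target disagree on '' vs NULL, '' becomes a single
    space for NOT NULL columns, and NULL (None) when NULL is allowed."""
    if value == "":
        return " " if not_null else None
    return value

print(repr(convert_empty("", True)))     # ' '
print(repr(convert_empty("", False)))    # None
print(repr(convert_empty("abc", True)))  # 'abc' (non-empty values pass through)
```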

Chr(0)/0x00 change?

The special character Chr(0) (0x00) can be inserted in DB2/Oracle/MySQL.

However, it cannot be inserted into PostgreSQL-family databases, so Chr(0) is replaced with '' (i.e. removed).
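The replacement amounts to removing the NUL bytes from the string, which can be sketched like this:

```python
def strip_nul(value: str) -> str:
    """PostgreSQL-family databases reject 0x00 in text values, so Chr(0)
    is replaced with '' -- i.e. the NUL bytes are simply removed."""
    return value.replace("\x00", "")

print(repr(strip_nul("ab\x00cd")))  # 'abcd'
```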

Time zone change?

  • Make sure that the migration server is in the same time zone as the application server.
  • Make sure that the database nodes are in the same time zone.

Slow log explosion during migration?

When migrating to MySQL, the MySQL slow log may grow rapidly. MTK inserts multiple records per statement, so a single insert can take longer than the long_query_time value and be written to the slow log file.
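For reference, the relevant MySQL settings can be inspected and adjusted like this (the value 600 is only an illustration; choose one appropriate for your migration window):

```sql
-- Check the current threshold (in seconds)
SHOW GLOBAL VARIABLES LIKE 'long_query_time';

-- Raise it for the duration of the migration
SET GLOBAL long_query_time = 600;

-- Or disable the slow query log entirely while migrating
SET GLOBAL slow_query_log = OFF;
```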

It is recommended to turn off full query logging or to increase long_query_time for the duration of the migration.

Note: the package feature of this version supports stored procedures but does not support functions.

Oracle package migration processing plan?

See convertPackageMethod.

Migration objects not defined?

No migration objects are configured. Please see Objects.

Performance

MTK migrates data with concurrent tasks; the degree of concurrency is controlled by limit.parallel. The concurrency granularity is:

  • a table (no partitions, no custom query conditions)
  • a single partition/subpartition of a partitioned table
  • a custom query condition (each query condition is one concurrent task)

The total number of concurrent threads is calculated as: limit.parallel (worker threads) + limit.parallel (query threads) + limit.parallel × parallelInsert (write threads). Configure the concurrency according to the CPUs of the machine running MTK, so that it uses between 50% and 70% of the total number of CPUs.
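The thread-count formula can be sketched as follows; the sizing helper that derives limit.parallel from a CPU budget is an illustrative heuristic based on the 50%-70% guidance above, not an official MTK rule:

```python
import math

def mtk_total_threads(parallel: int, parallel_insert: int) -> int:
    """Total concurrent threads per the formula above:
    workers + query threads + write threads."""
    return parallel + parallel + parallel * parallel_insert

def suggested_parallel(cpu_count: int, parallel_insert: int) -> int:
    """Pick limit.parallel so total threads stay within ~70% of the CPUs
    (illustrative heuristic only)."""
    budget = math.floor(cpu_count * 0.7)
    # total = p * (2 + parallel_insert)  =>  p = budget // (2 + parallel_insert)
    return max(1, budget // (2 + parallel_insert))

print(mtk_total_threads(4, 2))    # 4 + 4 + 4*2 = 16 threads
print(suggested_parallel(32, 2))  # floor(32*0.7)=22; 22 // (2+2) = 5
```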

  • Why are there custom query conditions?

A single query has a performance ceiling, limited by the configuration of the database host and of the machine running MTK. This limit may be a few MB/s or tens of MB/s; splitting the work into multiple queries improves overall migration throughput.

  • When do I need to configure custom query conditions?

    • A large table that is not partitioned.
    • A large partitioned table whose data is unevenly distributed, with a large amount of data in one partition.
  • How do I customize the query conditions?

See [tablesplit](../config/mtk-object.md#tablesplit).
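Conceptually, splitting one big scan into ranged queries looks like the sketch below (table and column names are hypothetical; MTK's actual mechanism is the tablesplit configuration linked above):

```python
def split_queries(table: str, key: str, min_id: int, max_id: int, parts: int):
    """Split a full-table scan into `parts` range predicates so each range
    can be fetched by a separate concurrent task (hypothetical example)."""
    step = (max_id - min_id) // parts + 1
    queries = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + step - 1, max_id)
        queries.append(f"SELECT * FROM {table} WHERE {key} BETWEEN {lo} AND {hi}")
        lo = hi + 1
    return queries

for q in split_queries("big_table", "id", 1, 1_000_000, 4):
    print(q)
```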

Copyright © 2011-2024 www.enmotech.com All rights reserved.