MDB Channel Function
In the MDB system, a channel is a logical unit that connects a source node and a target node through a series of operations. Data replication, scheduling, error handling, performance tuning, and other operations take place in the channel. MDB uses the channel to synchronize data from the source node to the target node.
Currently, a channel supports only one source node and one target node, that is, one-to-one data synchronization and replication.
Channel List
Fuzzy search by channel name and node name is supported.
Channel Addition
Add a data synchronization channel to synchronize data, for example, from Oracle to MogDB. When adding a channel, the source node type cannot be the same as the target node type.
Channel Deletion
If a channel is not running, it can be deleted directly. If the channel is running, stop the task in the channel before deleting the channel.
Channel Synchronization Pausing and Starting
Starting and suspending channel synchronization are supported.
Channel Details
Channel details show the channel name, status, number of synchronization objects and tables, node names, capture status, and integrate status.
Channel-Configuration
The channel name can be modified.
Channel-Object
Add Object
Select the objects to be synchronized from the source node to the target node. If a mapping rule is configured, rule conversion is performed. By default, the case of the target object name is converted based on the naming convention of the target node.
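As a minimal illustration of this default case conversion (not MDB's internal implementation; the function and parameter names below are hypothetical), assuming Oracle stores unquoted identifiers in uppercase while MogDB/openGauss folds them to lowercase:

```python
def to_target_case(source_name: str, target_type: str = "mogdb") -> str:
    """Convert a source object name to the target node's default identifier case."""
    schema, obj = source_name.split(".", 1)
    if target_type.lower() in ("mogdb", "opengauss", "postgresql"):
        # These targets fold unquoted identifiers to lowercase.
        return f"{schema.lower()}.{obj.lower()}"
    # Oracle-style targets fold unquoted identifiers to uppercase.
    return f"{schema.upper()}.{obj.upper()}"

print(to_target_case("SCOTT.EMP"))  # -> scott.emp
```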
Delete Object
Delete the added synchronization object.
Advanced Import
Enter the object name or import the object using a file.
Manual Import
Enter the object name of the source node (format: schema.table) to add the object to be synchronized to the target node.
File Import
Select a template based on the object type, and fill in schemaName, objectType, and objectName in the template format (a sample follows the list below). After filling in the template, select the file when importing objects.
- schemaName: source schema name.
- objectType: TABLE, SEQUENCE, SYNONYM, VIEW, TRIGGER, PROCEDURE, FUNCTION, or PACKAGE.
- objectName: source object name.
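A minimal, hypothetical sample of such a template (the actual template layout shipped with MDB may differ; the field names follow the descriptions above):

```csv
schemaName,objectType,objectName
SCOTT,TABLE,EMP
SCOTT,VIEW,EMP_SALARY_V
```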
Import Result
View the result of importing objects. Click the number of failed objects to view the cause.
Synchronized Object Structure Configuration
Select whether to synchronize the structure of the object.
Synchronized Snapshot Configuration
Enable Full or Skip Full to control whether to synchronize full data of an object.
Synchronized Incremental Configuration
Enable Increment or Skip Increment to control whether to synchronize the incremental data of an object.
Activate
Perform one-click synchronization on a single object.
Reset
Perform a reset operation on a single object.
Details
- View the object structure synchronization result
- View the table data synchronization result
- View the sequence data synchronization result
View Mapping
This part shows the field mapping information between the source node and the target node. Only tables can be viewed.
Field Mapping Configuration
This part supports modification of the schema, table name, and field name on the target node. After the modification is saved, perform pre-check, object synchronization, and data synchronization again.
Data Fragmentation
Automatic and custom sharding are supported.
Mapping Rule
Schema Mapping Configuration
- Multiple mapping rules can be configured; they are applied in the order listed.
- The configuration takes effect immediately after it is saved. Object and data synchronization need to be performed again.
Object Mapping Configuration
- Character replacement: a specified string in the source object name is replaced with a specified string to produce the target object name.
- Regular replacement: the part of the source object name that matches a regular expression is replaced with a specified string to produce the target object name.
- Multiple mapping rules can be configured; they are applied in the order listed (see the sketch after this list).
- The configuration takes effect immediately after it is saved. Object and data synchronization need to be performed again.
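The following sketch illustrates only the rule semantics described above (character replacement and regular replacement applied in the listed order); the rule format and names are hypothetical and not MDB's actual configuration:

```python
import re

# Hypothetical rule list: (kind, pattern, replacement), applied in the order listed.
rules = [
    ("char", "T_", ""),          # character replacement: drop the "T_" prefix
    ("regex", r"_\d{4}$", ""),   # regular replacement: strip a trailing _YYYY suffix
]

def map_object_name(name: str) -> str:
    """Derive the target object name from the source object name."""
    for kind, pattern, replacement in rules:
        if kind == "char":
            name = name.replace(pattern, replacement)
        else:  # "regex"
            name = re.sub(pattern, replacement, name)
    return name

print(map_object_name("T_ORDERS_2023"))  # -> ORDERS
```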
Channel-Advanced Parameter
In MDB, advanced parameters control the logic of the channel synchronization process, for example, skipping snapshots, synchronizing only incremental data, and overriding existing objects on the target node (an illustrative combination of these parameters follows the table below).
Parameter | Default | Description |
---|---|---|
compatibleModel | ON | Migration compatibility mode. ON: compatibility is enabled; OFF: compatibility is disabled. |
overrideMode | NONE | Override mode. NONE (default): data is not overridden; FORCE: data is overridden. Note: if FORCE is selected, target objects such as tables are deleted first, but the schema is not deleted. |
tableSpaceSwitch | OFF | Whether to ignore the tablespace. ON: the tablespace is not ignored; OFF: the tablespace is ignored. |
skipSnapShot | ON | Whether to skip full data. ON: full data is skipped; OFF: full data is not skipped. |
heartbeatIntervalMs | 0 | Heartbeat interval (ms). 0: the heartbeat is disabled; a positive integer enables it. |
heartbeatTableName | heartbeat | Heartbeat table name. A heartbeat table with this name is created on the source node. |
skipMigrationObject | OFF | Whether to skip object synchronization. |
postgresqlPluginName | decoderbufs | PostgreSQL logical decoding plug-in: decoderbufs or wal2json. |
topicName | | Send topic: the topic to which data is sent when the target node is Kafka/DataHub. |
migrationSchema | ON | Whether to migrate the schema. |
migrationIndexConstraint | ON | Whether to migrate indexes and constraints. |
skipSpecialTypeTable | ON | Whether to skip object synchronization for special-type tables. |
logMiningStartScn | | log.mining.start.scn |
logMiningBatchSizeMin | 1000 | log.mining.batch.size.min |
logMiningBatchSizeMax | 100000 | log.mining.batch.size.max |
logMiningBatchSizeDefault | 20000 | log.mining.batch.size.default |
logMiningViewFetchSize | 10000 | log.mining.view.fetch.size |
logMiningArchiveLogHours | 0 | log.mining.archive.log.hours |
logMiningTransactionRetentionHours | 0 | log.mining.transaction.retention.hours |
skipDebeziumSchema | ON | Whether to skip the Debezium schema information to streamline migrated data. |
refreshSourceBeforeDataSync | ON | Refresh the source database before data synchronization (only one refresh). |
decodingPluginName | wal2json | Plug-in used by the MogDB parallel decoding function. |
continueWithError | OFF | Whether to continue delivery after an exception occurs |
lobEnabled | ON | Whether to synchronize BLOBs and CLOBs |
sequenceUpdateInterval | 1440 | Normal sequence update interval (minutes) |
autoIncUpdateInterval | 1440 | Auto-increment sequence update interval (minutes) |
virtualColumnEnabled | ON | Whether to synchronize virtual columns. |
enableTargetMerge | ON | Whether to enable merging of delivered data. |
targetMergeRows | 1000 | Maximum number of data rows to merge. |
targetMergeInterval | 3 | Maximum data merge interval (s). |
restrictMode | OFF | Strict mode (available only for tables without primary keys at the source) |
migrationStage | Structure synchronization, full migration, incremental synchronization | Controls the migration phases. |
targetObjectCharset | utf8mb4 | Character set specified for target objects. |
minKeySplitSize | 10240 | Minimum key fragmentation threshold (MB). If a table exceeds this threshold, the automatic key (logical primary key) partitioning algorithm is considered. |
numericDefaultScale | 10 | Default scale for DECIMAL (the default is 10). |
autoConvertDataType | OFF | Whether to automatically convert data types where a reasonable conversion exists. |
autoResolveColMismatch | OFF | Whether to automatically resolve field mismatches between the source and target. |
enableBit1toBool | OFF | Migrate MySQL bit(1) to MogDB bool |
enableBlob2Blob | false | Whether to map BLOB to the BLOB data type. |
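For illustration only, here is a combination of the parameters above that matches the scenario mentioned earlier (skip full data, synchronize only incremental data, and override existing objects on the target node). The key=value form is just a compact way to show the settings; in MDB these values are set individually on the Channel-Advanced Parameter page, not in a configuration file:

```text
skipSnapShot=ON     # skip full data; only incremental data is synchronized
overrideMode=FORCE  # existing target objects (e.g., tables) are dropped first; the schema is kept
compatibleModel=ON  # keep migration compatibility mode enabled (default)
```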
Channel-Performance Monitoring
Channel-Log Synchronization
After the data synchronization operation, the process logs of data extraction and delivery can be downloaded.
Channel-Task Summary
The synchronization progress, execution time of each phase, and result are displayed in the channel.
Task Synchronization
- Perform one-key synchronization, or perform the pre-check, structure synchronization, and data synchronization in sequence.
- Structure synchronization reset: resets structure synchronization and deletes the structure synchronization progress and records of all tables in the channel.
- Pause data synchronization: pauses data capturing and integrating.
- Start data synchronization: resumes data synchronization, including data capturing and integrating.
- Data synchronization reset: resets data synchronization and deletes the data synchronization progress and records of all tables in the channel.
- Suspend capture: suspends the data capture service.
- Suspend integrate: suspends the data integrate service.
- Start capture: resumes the data capture service.
- Start integrate: resumes the data integrate service.
Data Check Task
- After data synchronization is complete, start a data verification task.
- Data verification details: view the data verification result.
Task Failure Viewing
- View pre-check exceptions: execute the command to repair them automatically, or download the repair script and execute it manually.
- View synchronization object exceptions: the SQL statement on the target node can be edited, formatted, and copied. After an SQL statement is edited and saved, execute it again; after successful execution, the exception is removed from the list.
- View data synchronization exceptions: click an exception in the data details list to display the failed data details. Discard and retry are supported.