Logical Backup and Restoration
gs_dump
Background
gs_dump, provided by MogDB, is used to export database information. You can export a database or its objects (such as schemas, tables, and views). The database can be the default mogdb database or a user-specified database.
gs_dump is executed by OS user omm.
When gs_dump is used to export data, other users can still access (read and write) MogDB databases.
gs_dump can export complete, consistent data. For example, if gs_dump is started to export database A at T1, data of the database at that time point will be exported, and modifications on the database after that time point will not be exported.
gs_dump can export database information to a plain-text SQL script file or archive file.
- Plain-text SQL script: It contains the SQL statements required to restore the database. You can use gsql to execute the SQL script. With only minor modifications, the SQL script can rebuild the database on other hosts or other database products.
- Archive file: It contains the data required to restore the database. It can be a tar-, directory-, or custom-format archive. For details, see Table 1. The export result must be used with gs_restore to restore the database, which allows users to select the content to be imported and even to reorder it.
Functions
gs_dump can create export files in four formats, which are specified by -F or --format=, as listed in Table 1.
Table 1 Formats of exported files

| Format | Value of -F | Description | Suggestion | Corresponding Import Tool |
| --- | --- | --- | --- | --- |
| Plain-text | p | A plain-text script file containing SQL statements and commands. The commands can be executed in gsql, a command-line terminal, to recreate database objects and load table data. | You are advised to use plain-text exported files for small databases. | Before using gsql to restore database objects, you can use a text editor to edit the plain-text export file as required. |
| Custom | c | A binary file that allows the restoration of all or selected database objects from an exported file. | You are advised to use custom-format archive files for medium or large databases. | You can use gs_restore to import database objects from a custom-format archive. |
| Directory | d | A directory containing directory files and the data files of tables and BLOB objects. | - | You can use gs_restore to import database objects from a directory archive. |
| .tar | t | A tar-format archive that allows the restoration of all or selected database objects from an exported file. It cannot be further compressed and has an 8 GB limit on the size of a single table. | - | You can use gs_restore to import database objects from a .tar archive. |
NOTE: To reduce the size of an exported file, you can use gs_dump to compress it to a plain-text file or custom-format file. By default, a plain-text file is not compressed when generated. When a custom-format archive is generated, a medium level of compression is applied by default. Archived exported files cannot be compressed using gs_dump. When a plain-text file is exported in compressed mode, gsql fails to import data objects.
Precautions
Do not modify an exported file or its content. Otherwise, restoration may fail.
To ensure data consistency and integrity, gs_dump acquires a share lock on each table to be dumped. If another transaction has already acquired a share lock on a table, gs_dump waits until this lock is released and then locks the table for dumping. If the table cannot be locked within the specified time, the dump fails. You can customize the timeout duration for waiting for lock release by specifying the --lock-wait-timeout parameter.
Syntax
gs_dump [OPTION]... [DBNAME]
NOTE: DBNAME is not preceded by a short or long option. It specifies the database to connect to. For example, specify DBNAME directly, without a -d option:
gs_dump -p port_number mogdb -f dump1.sql
or
export PGDATABASE=mogdb
gs_dump -p port_number -f dump1.sql
Environment variable: PGDATABASE
Parameter Description
Common parameters
- -f, --file=FILENAME
  Sends the output to the specified file or directory. If this parameter is omitted, the standard output is used. If the output format is -F c, -F d, or -F t, the -f parameter must be specified. If the value of -f contains a directory path, the current user must have read and write permissions on the directory, and the directory must not already exist.
- -F, --format=c|d|t|p
  Selects the exported file format. The format can be:
  - p|plain: Generates a plain-text SQL script file. This is the default value.
  - c|custom: Outputs a custom-format archive suitable for use as input to gs_restore. This is the most flexible output format: it allows users to manually select and reorder the archived items during restoration. An archive in this format is compressed by default.
  - d|directory: Creates a directory containing directory files and the data files of tables and BLOBs.
  - t|tar: Outputs a .tar archive suitable for use as input to gs_restore. The .tar format is compatible with the directory format: extracting a .tar archive generates a valid directory-format archive. However, a .tar archive cannot be further compressed and has an 8 GB limit on the size of a single table. The order of table data items cannot be changed during restoration. A .tar archive can also be used as input to gsql.
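  Since extracting a .tar archive yields a valid directory-format archive, the two formats can be converted with the standard tar utility. A minimal sketch, with illustrative paths:
  mkdir backup/MPPDB_backup_dir
  tar -xf backup/MPPDB_backup.tar -C backup/MPPDB_backup_dir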
- -v, --verbose
  Specifies verbose mode. If specified, gs_dump writes detailed object comments and start/stop times to the dump file, and progress messages to standard error.
- -V, --version
  Prints the gs_dump version and exits.
- -Z, --compress=0-9
  Specifies the compression level to use.
  Value range: 0-9
  - 0 indicates no compression.
  - 1 indicates the lowest compression ratio and the fastest processing speed.
  - 9 indicates the highest compression ratio and the slowest processing speed.
  For the custom-format archive, this option specifies the compression level of a single table data segment. By default, data is compressed at a medium level. The plain-text and .tar archive formats do not support compression currently.
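  For example, the following sketch (the port and path are illustrative) dumps the mogdb database as a custom-format archive with maximum compression:
  gs_dump -p 37300 mogdb -F c -Z 9 -f backup/MPPDB_backup.dmp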
- --lock-wait-timeout=TIMEOUT
  Does not wait indefinitely to acquire shared table locks at the beginning of the dump. The dump fails if a table cannot be locked within the specified time. The timeout period can be specified in any of the formats accepted by SET statement_timeout.
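  A minimal sketch (the timeout value is illustrative) that makes the dump fail if a table lock cannot be acquired within two minutes:
  gs_dump -p 37300 mogdb --lock-wait-timeout=120s -f backup/MPPDB_backup.sql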
- -?, --help
  Displays help about gs_dump parameters and exits.
Dump parameters:
- -a, --data-only
  Generates only the data, not the schema (data definitions). Dumps the table data, large objects, and sequence values.
- -b, --blobs
  Reserved interface for function expansion. This parameter is not recommended.
- -c, --clean
  Writes commands to clean (drop) database objects into the backup file before the commands that create them. (If no such objects exist in the target database, gs_restore may report some errors.)
  This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -C, --create
  Starts the backup file content with the commands to create the database and connect to it. (If the script is executed in this mode, you can connect to any database to run the commands; the data is restored to the database created by the script, not the one you connected to.)
  This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -E, --encoding=ENCODING
  Creates the dump file in the specified character set encoding. By default, the dump file is created in the database encoding. (Alternatively, you can set the environment variable PGCLIENTENCODING to the required dump encoding.)
- -n, --schema=SCHEMA
  Dumps only the schemas matching SCHEMA. This option includes the schema itself and all the objects it contains. If this option is not specified, all non-system schemas in the target database are dumped. Multiple schemas can be selected by specifying the -n option multiple times. SCHEMA is interpreted as a pattern according to the same rules used by the \d command of gsql, so multiple schemas can also be selected by using wildcard characters in the pattern. When you use wildcard characters, quote the pattern to prevent the shell from expanding them.
  NOTE:
  - If -n is specified, gs_dump does not dump any other database objects that the selected schemas might depend on. Therefore, there is no guarantee that the result of a single-schema dump can be automatically restored to an empty database.
  - If -n is specified, non-schema objects are not dumped.
  Multiple schemas can be dumped by entering -n schemaname multiple times.
For example:
gs_dump -h host_name -p port_number mogdb -f backup/bkp_shl2.sql -n sch1 -n sch2
In the preceding example, sch1 and sch2 are dumped.
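Wildcard patterns work as well. In the following sketch (the pattern is illustrative), all schemas whose names start with sch are dumped; the quotes prevent the shell from expanding the asterisk:
gs_dump -h host_name -p port_number mogdb -f backup/bkp_sch.sql -n 'sch*'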
- -N, --exclude-schema=SCHEMA
  Does not dump any schemas matching SCHEMA. The pattern is interpreted according to the same rules as for -n. -N can be specified multiple times to exclude schemas matching any of the specified patterns.
  When both -n and -N are specified, only the schemas that match at least one -n option but no -N option are dumped. If -N is specified without -n, the schemas matching -N are excluded from an otherwise normal dump.
  Multiple schemas can be excluded by entering -N schemaname multiple times.
For example:
gs_dump -h host_name -p port_number mogdb -f backup/bkp_shl2.sql -N sch1 -N sch2
In the preceding example, sch1 and sch2 will be excluded during the dumping.
- -o, --oids
  Dumps object identifiers (OIDs) as part of the data in each table. Use this parameter if your application references the OID columns in some way. Otherwise, do not use it.
- -O, --no-owner
  Does not output commands to set the ownership of objects to match the original database. By default, gs_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set the ownership of created database objects. These statements will fail when the script is run unless it is started by a system administrator (or the user who owns all of the objects in the script). To make a script that can be restored by any user and that gives that user ownership of all objects, specify -O.
  This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- -s, --schema-only
  Dumps only the object definitions (schema) but not data.
- -S, --sysadmin=NAME
  Reserved interface for function expansion. This parameter is not recommended.
- -t, --table=TABLE
  Specifies a list of tables, views, sequences, or foreign tables to be dumped. You can specify multiple -t parameters or use wildcard characters.
  When you use wildcard characters, quote the patterns to prevent the shell from expanding them.
  The -n and -N options have no effect when -t is used, because tables selected by -t are dumped regardless of those options.
  NOTE:
  - The number of -t parameters must be less than or equal to 100.
  - If the number of -t parameters is greater than 100, you are advised to use the --include-table-file parameter to replace some -t parameters.
  - If -t is specified, gs_dump does not dump any other database objects that the selected tables might depend on. Therefore, there is no guarantee that the result of a single-table dump can be automatically restored to an empty database.
  - -t tablename dumps only visible tables in the default search path. -t '*.tablename' dumps tablename tables in all the schemas of the dumped database. -t schema.table dumps tables in the specified schema.
  - -t tablename does not export trigger information from a table.
For example:
gs_dump -h host_name -p port_number mogdb -f backup/bkp_shl2.sql -t schema1.table1 -t schema2.table2
In the preceding example, schema1.table1 and schema2.table2 are dumped.
- --include-table-file=FILENAME
  Specifies a file listing the tables to be dumped.
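  For example, assuming a file /home/table_list.txt (the path is illustrative) that contains one schema.table entry per line, in the same format as for --exclude-table-file below:
  gs_dump -h host_name -p port_number mogdb --include-table-file=/home/table_list.txt -f backup/bkp_tables.sql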
- -T, --exclude-table=TABLE
  Specifies a list of tables, views, sequences, or foreign tables not to be dumped. You can specify multiple -T parameters or use wildcard characters.
  When both -t and -T specify the same object, the object is dumped: the -t list takes precedence over the -T list.
For example:
gs_dump -h host_name -p port_number mogdb -f backup/bkp_shl2.sql -T table1 -T table2
In the preceding example, table1 and table2 are excluded from the dumping.
- --exclude-table-file=FILENAME
  Specifies a file listing the tables that are not to be dumped.
  NOTE: As with --include-table-file, each line of the file specifies one table in the following format:
  schema1.table1
  schema2.table2
  ……
- -x, --no-privileges | --no-acl
  Prevents the dumping of access permissions (GRANT/REVOKE commands).
- --binary-upgrade
  Reserved interface for function expansion. This parameter is not recommended.
- --binary-upgrade-usermap="USER1=USER2"
  Reserved interface for function expansion. This parameter is not recommended.
- --column-inserts | --attribute-inserts
  Exports data as INSERT commands with explicit column names (INSERT INTO table (column, …) VALUES …). This makes restoration slow. However, since this option generates a separate command for each row, an error in reloading a row causes the loss of only that row rather than the entire table content.
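  For illustration (the table and values are hypothetical), the resulting dump then contains one self-contained statement per row, such as:
  INSERT INTO schema1.table1 (a, b) VALUES (1, 'text');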
- --disable-dollar-quoting
  Disables the use of dollar quoting for function bodies, and forces them to be quoted using the SQL standard string syntax.
- --disable-triggers
  Reserved interface for function expansion. This parameter is not recommended.
- --exclude-table-data=TABLE
  Does not dump data for any table matching the TABLE pattern. The pattern is interpreted according to the same rules as for -t.
  --exclude-table-data can be entered more than once to exclude tables matching any of several patterns. This option is helpful when you need the definition of a table but not its data.
  To exclude data of all tables in the database, see --schema-only.
- --inserts
  Dumps data as INSERT statements (rather than COPY). This makes restoration slow.
  However, since this option generates a separate command for each row, an error in reloading a row causes the loss of only that row rather than the entire table content. Note that the restoration may fail if you rearrange the column order. The --column-inserts option is safe against column order changes, though even slower.
- --no-security-labels
  Reserved interface for function expansion. This parameter is not recommended.
- --no-tablespaces
  Does not issue commands to select tablespaces. With this option, all objects will be created in the default tablespace during restoration, no matter which tablespace they were originally in.
  This parameter is used only for the plain-text format. For the archive format, you can specify the option when using gs_restore.
- --no-unlogged-table-data
  Reserved interface for function expansion. This parameter is not recommended.
- --non-lock-table
  Reserved interface for function expansion. This parameter is not recommended.
- --include-alter-table
  Dumps deleted columns of tables, that is, records columns that have been dropped.
- --quote-all-identifiers
  Forcibly quotes all identifiers. This parameter is useful when you dump a database for migration to a later version, in which additional keywords may be introduced.
- --section=SECTION
  Dumps only the named section. The section name can be pre-data, data, or post-data.
- --serializable-deferrable
  Uses a serializable transaction for the dump to ensure that the snapshot used is consistent with later database states. It does so by waiting for a point in the transaction stream at which no anomalies can be present, so that there is no risk of the dump failing or of other transactions rolling back with a serialization failure.
  This option has no benefit for a dump intended only for disaster recovery. It is helpful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database keeps being updated. Without it, the dump may reflect a state that is not consistent with any serial execution of the transactions eventually committed.
  This option makes no difference if there are no active read-write transactions when gs_dump is started. If read-write transactions are active, the start of the dump may be delayed for an indeterminate length of time.
- --use-set-session-authorization
  Outputs the SQL-standard SET SESSION AUTHORIZATION command rather than ALTER OWNER to determine object ownership. This makes the dump more standards-compatible. However, if a dump file contains objects that have historical problems, restoration may fail. A dump using SET SESSION AUTHORIZATION requires system administrator permissions, whereas ALTER OWNER requires lower permissions.
- --with-encryption=AES128
  Specifies that the dumped data needs to be encrypted using AES128.
- --with-key=KEY
  Specifies the AES128 key. Its length must be 16 bytes.
  NOTE:
  Encrypted export with gs_dump supports only the plain format. Data exported with -F plain must be imported through the gsql tool, and if it was exported encrypted, the --with-key parameter must be specified when importing it with gsql.
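  A sketch of an encrypted export and the matching gsql import (the port, key, and paths are illustrative; the key must be 16 bytes):
  gs_dump -p 37300 mogdb -F p --with-encryption=AES128 --with-key=1234567812345678 -f backup/enc_backup.sql
  gsql -p 37300 -d mogdb --with-key=1234567812345678 -f backup/enc_backup.sql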
- --include-depend-objs
  Includes information about the objects that depend on the specified object in the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
- --exclude-self
  Excludes information about the specified object itself from the backup result. This parameter takes effect only if the -t or --include-table-file parameter is specified.
- --dont-overwrite-file
  Does not overwrite existing files. By default, existing files in plain-text, .tar, and custom formats are overwritten. This option does not apply to the directory format.
  For example:
  Assume that the backup.sql file exists in the current directory. If you specify -f backup.sql, the backup.sql file generated in the current directory overwrites the existing one by default.
  If the backup file already exists and --dont-overwrite-file is specified, an error is reported with the message that the dump file exists.
gs_dump -p port_number mogdb -f backup.sql -F plain --dont-overwrite-file
NOTE:
- The -s/--schema-only and -a/--data-only parameters do not coexist.
- The -c/--clean and -a/--data-only parameters do not coexist.
- --inserts/--column-inserts and -o/--oids do not coexist, because OIDs cannot be set using INSERT statements.
- --role must be used in conjunction with --rolepassword.
- --binary-upgrade-usermap must be used in conjunction with --binary-upgrade.
- --include-depend-objs or --exclude-self takes effect only when -t or --include-table-file is specified.
- --exclude-self must be used in conjunction with --include-depend-objs.
Connection parameters:
- -h, --host=HOSTNAME
  Specifies the host name. If the value begins with a slash (/), it is used as the directory for the UNIX domain socket. The default value is taken from the PGHOST environment variable (if set). Otherwise, a UNIX domain socket connection is attempted.
  This parameter is used only for defining the names of hosts outside MogDB. The names of the hosts inside MogDB must be 127.0.0.1.
  Example: host name
  Environment variable: PGHOST
- -p, --port=PORT
  Specifies the host port number. If the thread pool function is enabled, you are advised to use the pooler port, that is, the host port number plus 1.
  Environment variable: PGPORT
- -U, --username=NAME
  Specifies the user name of the host to be connected.
  If the user name is not specified, the system administrator is used by default.
  Environment variable: PGUSER
- -w, --no-password
  Never issues a password prompt. The connection attempt fails if the host requires password authentication and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user is present to enter a password.
- -W, --password=PASSWORD
  Specifies the user password for the connection. If the host uses the trust authentication policy, the administrator does not need to enter the -W option. If the -W option is not provided and you are not a system administrator, the dump/restore tool will prompt you for a password.
- --role=ROLENAME
  Specifies a role name to be used for creating the dump. This option causes gs_dump to issue a SET ROLE statement after connecting to the database. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_dump, because it allows the user to switch to a role with the required permissions. Some installations have a policy against logging in directly as a system administrator; this option allows dumping data without violating the policy.
- --rolepassword=ROLEPASSWORD
  Specifies the password for the role.
NOTE:
If local additions have been made to the template1 database in MogDB, be careful to restore the output of gs_dump into a truly empty database; otherwise, you are likely to get errors due to duplicate definitions of the added objects. To create an empty database without any local additions, copy from template0 rather than template1. Example:
CREATE DATABASE foo WITH TEMPLATE template0;
Each table in a .tar archive must be smaller than 8 GB. (This is a limitation of the tar file format.) The total size of a .tar archive and the sizes of the other output formats are not limited, except possibly by the OS.
The dump file generated by gs_dump does not contain the statistics used by the optimizer to make execution plans. Therefore, you are advised to run ANALYZE after restoring from a dump file to ensure optimal performance. The dump file does not contain any ALTER DATABASE … SET commands; these settings are dumped by gs_dumpall, along with database users and other installation settings.
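For example, after importing a plain-text dump with gsql, statistics can be regenerated as follows (the port is illustrative):
gsql -d mogdb -p 37300 -c "ANALYZE;"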
Examples
Use gs_dump to dump a database as a SQL text file or a file in other formats.
In the following examples, Bigdata@123 indicates the password for the database user, backup/MPPDB_backup.sql indicates an exported file (where backup is a relative path under the current directory), 37300 indicates the port number of the database server, and mogdb indicates the name of the database to be accessed.
NOTE: Before exporting files, ensure that the directory exists and you have the read and write permissions on the directory.
Example 1: Use gs_dump to export the full information of the mogdb database. The exported MPPDB_backup.sql file is in plain-text format.
gs_dump -U omm -W Bigdata@123 -f backup/MPPDB_backup.sql -p 37300 mogdb -F p
gs_dump[port='37300'][mogdb][2018-06-27 09:49:17]: The total objects number is 356.
gs_dump[port='37300'][mogdb][2018-06-27 09:49:17]: [100.00%] 356 objects have been dumped.
gs_dump[port='37300'][mogdb][2018-06-27 09:49:17]: dump database mogdb successfully
gs_dump[port='37300'][mogdb][2018-06-27 09:49:17]: total time: 1274 ms
Use gsql to import data from the exported plain-text file.
Example 2: Use gs_dump to export the full information of the mogdb database. The exported MPPDB_backup.tar file is in .tar format.
gs_dump -U omm -W Bigdata@123 -f backup/MPPDB_backup.tar -p 37300 mogdb -F t
gs_dump[port='37300'][mogdb][2018-06-27 10:02:24]: The total objects number is 1369.
gs_dump[port='37300'][mogdb][2018-06-27 10:02:53]: [100.00%] 1369 objects have been dumped.
gs_dump[port='37300'][mogdb][2018-06-27 10:02:53]: dump database mogdb successfully
gs_dump[port='37300'][mogdb][2018-06-27 10:02:53]: total time: 50086 ms
Example 3: Use gs_dump to export the full information of the mogdb database. The exported MPPDB_backup.dmp file is in custom format.
gs_dump -U omm -W Bigdata@123 -f backup/MPPDB_backup.dmp -p 37300 mogdb -F c
gs_dump[port='37300'][mogdb][2018-06-27 10:05:40]: The total objects number is 1369.
gs_dump[port='37300'][mogdb][2018-06-27 10:06:03]: [100.00%] 1369 objects have been dumped.
gs_dump[port='37300'][mogdb][2018-06-27 10:06:03]: dump database mogdb successfully
gs_dump[port='37300'][mogdb][2018-06-27 10:06:03]: total time: 36620 ms
Example 4: Use gs_dump to export the full information of the mogdb database. The exported MPPDB_backup file is in directory format.
gs_dump -U omm -W Bigdata@123 -f backup/MPPDB_backup -p 37300 mogdb -F d
gs_dump[port='37300'][mogdb][2018-06-27 10:16:04]: The total objects number is 1369.
gs_dump[port='37300'][mogdb][2018-06-27 10:16:23]: [100.00%] 1369 objects have been dumped.
gs_dump[port='37300'][mogdb][2018-06-27 10:16:23]: dump database mogdb successfully
gs_dump[port='37300'][mogdb][2018-06-27 10:16:23]: total time: 33977 ms
Example 5: Use gs_dump to export the information of the mogdb database, excluding the information of the table specified in the /home/MPPDB_temp.sql file. The exported MPPDB_backup.sql file is in plain-text format.
gs_dump -U omm -W Bigdata@123 -p 37300 mogdb --exclude-table-file=/home/MPPDB_temp.sql -f backup/MPPDB_backup.sql
gs_dump[port='37300'][mogdb][2018-06-27 10:37:01]: The total objects number is 1367.
gs_dump[port='37300'][mogdb][2018-06-27 10:37:22]: [100.00%] 1367 objects have been dumped.
gs_dump[port='37300'][mogdb][2018-06-27 10:37:22]: dump database mogdb successfully
gs_dump[port='37300'][mogdb][2018-06-27 10:37:22]: total time: 37017 ms
Example 6: Use gs_dump to export only the information about the views that depend on the testtable table. Create another testtable table, and then restore the views that depend on it.
- Back up only the views that depend on the testtable table.
  gs_dump -s -p 37300 mogdb -t PUBLIC.testtable --include-depend-objs --exclude-self -f backup/MPPDB_backup.sql -F p
  gs_dump[port='37300'][mogdb][2018-06-15 14:12:54]: The total objects number is 331.
  gs_dump[port='37300'][mogdb][2018-06-15 14:12:54]: [100.00%] 331 objects have been dumped.
  gs_dump[port='37300'][mogdb][2018-06-15 14:12:54]: dump database mogdb successfully
  gs_dump[port='37300'][mogdb][2018-06-15 14:12:54]: total time: 327 ms
- Change the name of the testtable table.
  gsql -p 37300 mogdb -r -c "ALTER TABLE PUBLIC.testtable RENAME TO testtable_bak;"
- Create another testtable table.
  CREATE TABLE PUBLIC.testtable(a int, b int, c int);
- Restore the views for the new testtable table.
  gsql -p 37300 mogdb -r -f backup/MPPDB_backup.sql
gs_dumpall
Background
gs_dumpall, provided by MogDB, is used to export all MogDB database information, including data of the default database mogdb, user-defined databases, and common global objects of all MogDB databases.
gs_dumpall is executed by OS user omm.
When gs_dumpall is used to export data, other users can still access (read and write) MogDB databases.
gs_dumpall can export complete, consistent data. For example, if gs_dumpall is started to export MogDB databases at T1, data of the databases at that time point will be exported, and modifications on the databases after that time point will not be exported.
gs_dumpall exports all MogDB databases in two parts:
- gs_dumpall exports all global objects, including information about database users and groups, tablespaces, and attributes (for example, global access permissions).
- gs_dumpall invokes gs_dump to export SQL scripts from each MogDB database, which contain all the SQL statements required to restore databases.
Both parts are exported as plain-text SQL scripts. Use gsql to execute them to restore MogDB databases.
Precautions
- Do not modify an exported file or its content. Otherwise, restoration may fail.
- To ensure data consistency and integrity, gs_dumpall acquires a share lock on each table to be dumped. If another transaction has already acquired a share lock on a table, gs_dumpall waits until this lock is released and then locks the table for dumping. If a table cannot be locked within the specified time, the dump fails. You can customize the timeout duration for waiting for lock release by specifying the --lock-wait-timeout parameter.
- During an export, gs_dumpall reads all tables in a database. Therefore, you need to connect to the database as a MogDB administrator to export a complete file. When you use gsql to execute the SQL scripts, cluster administrator permissions are also required to add users and user groups and to create databases.
Syntax
gs_dumpall [OPTION]...
Parameter Description
Common parameters:
- -f, --filename=FILENAME
  Sends the output to the specified file. If this parameter is omitted, the standard output is used.
- -v, --verbose
  Specifies verbose mode. If specified, gs_dumpall writes detailed object comments and start/stop times to the dump file, and progress messages to standard error.
- -V, --version
  Prints the gs_dumpall version and exits.
- --lock-wait-timeout=TIMEOUT
  Does not wait indefinitely to acquire shared table locks at the beginning of the dump. The dump fails if a table cannot be locked within the specified time. The timeout period can be specified in any of the formats accepted by SET statement_timeout.
- -?, --help
  Displays help about gs_dumpall parameters and exits.
Dump parameters:
- -a, --data-only
  Dumps only the data, not the schema (data definitions).
- -c, --clean
  Runs SQL statements to delete databases before recreating them. DROP statements for roles and tablespaces are added as well.
- -g, --globals-only
  Dumps only global objects (roles and tablespaces) but no databases.
- -o, --oids
  Dumps object identifiers (OIDs) as part of the data in each table. Use this parameter if your application references the OID columns in some way. Otherwise, do not use it.
- -O, --no-owner
  Does not output commands to set the ownership of objects to match the original database. By default, gs_dumpall issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set the ownership of created schema objects. These statements will fail when the script is run unless it is started by a system administrator (or the user who owns all of the objects in the script). To make a script that can be restored by any user and that gives that user ownership of all objects, specify -O.
- -r, --roles-only
  Dumps only roles but not databases or tablespaces.
- -s, --schema-only
  Dumps only the object definitions (schema) but not data.
- -S, --sysadmin=NAME
  Specifies the name of the system administrator used during the dump.
- -t, --tablespaces-only
  Dumps only tablespaces but not databases or roles.
- -x, --no-privileges
  Prevents the dumping of access permissions (GRANT/REVOKE commands).
- --column-inserts | --attribute-inserts
  Exports data as INSERT commands with explicit column names (INSERT INTO table (column, …) VALUES …). This makes restoration slow. However, since this option generates a separate command for each row, an error in reloading a row causes the loss of only that row rather than the entire table content.
- --disable-dollar-quoting
  Disables the use of dollar quoting for function bodies, and forces them to be quoted using the SQL standard string syntax.
- --disable-triggers
  Reserved interface for function expansion. This parameter is not recommended.
- --inserts
  Dumps data as INSERT statements (rather than COPY). This makes restoration slow. The restoration may fail if you rearrange the column order. The --column-inserts option is safe against column order changes, though even slower.
- --no-security-labels
  Reserved interface for function expansion. This parameter is not recommended.
- --no-tablespaces
  Does not generate output statements to create tablespaces or select tablespaces for objects. With this option, all objects will be created in the default tablespace during restoration, no matter which tablespace they were originally in.
- --no-unlogged-table-data
  Reserved interface for function expansion. This parameter is not recommended.
- --quote-all-identifiers
  Forcibly quotes all identifiers. This parameter is useful when you dump a database for migration to a later version, in which additional keywords may be introduced.
- --dont-overwrite-file
  Does not overwrite an existing file.
- --use-set-session-authorization
  Outputs the SQL-standard SET SESSION AUTHORIZATION command rather than ALTER OWNER to determine object ownership. This makes the dump more standards-compatible. However, if a dump file contains objects that have historical problems, restoration may fail. A dump using SET SESSION AUTHORIZATION requires system administrator permissions, whereas ALTER OWNER requires lower permissions.
- --with-encryption=AES128
  Specifies that the dumped data needs to be encrypted using AES128.
- --with-key=KEY
  Specifies the AES128 key. Its length must be 16 bytes.
- --include-templatedb
  Includes template databases during the dump.
- --binary-upgrade
  Reserved interface for function expansion. This parameter is not recommended.
- --binary-upgrade-usermap="USER1=USER2"
  Reserved interface for function expansion. This parameter is not recommended.
- --tablespaces-postfix
  Reserved interface for function expansion. This parameter is not recommended.
- --parallel-jobs
  Specifies the number of concurrent backup processes. The value range is 1 to 1000.
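  A sketch (the port, path, and job count are illustrative) that runs four concurrent backup processes; note that --parallel-jobs must be used together with -f:
  gs_dumpall -p 37300 --parallel-jobs=4 -f backup/all_databases.sql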
NOTE:
- The -g/--globals-only and -r/--roles-only parameters do not coexist.
- The -g/--globals-only and -t/--tablespaces-only parameters do not coexist.
- The -r/--roles-only and -t/--tablespaces-only parameters do not coexist.
- The -s/--schema-only and -a/--data-only parameters do not coexist.
- The -r/--roles-only and -a/--data-only parameters do not coexist.
- The -t/--tablespaces-only and -a/--data-only parameters do not coexist.
- The -g/--globals-only and -a/--data-only parameters do not coexist.
- --tablespaces-postfix must be used in conjunction with --binary-upgrade.
- --binary-upgrade-usermap must be used in conjunction with --binary-upgrade.
- --parallel-jobs must be used in conjunction with -f/--filename.
Connection parameters:
- -h, --host
  Specifies the host name. If the value begins with a slash (/), it is used as the directory for the UNIX domain socket. The default value is taken from the PGHOST environment variable (if set). Otherwise, a UNIX domain socket connection is attempted.
  This parameter is used only for defining the names of hosts outside MogDB. The names of the hosts inside MogDB must be 127.0.0.1.
  Environment variable: PGHOST
- -l, --database
  Specifies the name of the database to connect to for dumping global objects and discovering the other databases to be dumped. If this parameter is not specified, the mogdb database is used. If the mogdb database does not exist, template1 is used.
- -p, --port
  Specifies the TCP port or the local UNIX domain socket file extension on which the server is listening for connections. The default value is taken from the PGPORT environment variable.
  If the thread pool function is enabled, you are advised to use the pooler port, that is, the listening port number plus 1.
  Environment variable: PGPORT
- -U, --username
  Specifies the user name to connect as.
  Environment variable: PGUSER
- -w, --no-password
  Never issues a password prompt. The connection attempt fails if the host requires password authentication and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user is present to enter a password.
- -W, --password
  Specifies the user password for the connection. If the host uses the trust authentication policy, the administrator does not need to enter the -W option. If the -W option is not provided and you are not a system administrator, the dump/restore tool will prompt you for a password.
- --role
  Specifies a role name to be used for creating the dump. This option causes gs_dumpall to issue a SET ROLE statement after connecting to the database. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_dumpall, because it allows the user to switch to a role with the required permissions. Some installations have a policy against logging in directly as a system administrator; this option allows dumping data without violating the policy.
- --rolepassword
  Specifies the password of the specified role.
Notice
gs_dumpall internally invokes gs_dump. For details about the diagnosis information, see gs_dump.
After the output of gs_dumpall is restored, run ANALYZE on each database so that the optimizer can have useful statistics.
gs_dumpall requires all needed tablespace directories to exist before the restoration. Otherwise, database creation will fail for databases in non-default locations.
Examples
Use gs_dumpall to export all MogDB databases at a time.
NOTE: gs_dumpall supports only plain-text format export. Therefore, only gsql can be used to restore a file exported using gs_dumpall.
gs_dumpall -f backup/bkp2.sql -p 37300
gs_dump[port='37300'][dbname='mogdb'][2018-06-27 09:55:09]: The total objects number is 2371.
gs_dump[port='37300'][dbname='mogdb'][2018-06-27 09:55:35]: [100.00%] 2371 objects have been dumped.
gs_dump[port='37300'][dbname='mogdb'][2018-06-27 09:55:46]: dump database dbname='mogdb' successfully
gs_dump[port='37300'][dbname='mogdb'][2018-06-27 09:55:46]: total time: 55567 ms
gs_dumpall[port='37300'][2018-06-27 09:55:46]: dumpall operation successful
gs_dumpall[port='37300'][2018-06-27 09:55:46]: total time: 56088 ms
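The exported script can then be replayed with gsql. A minimal sketch, assuming administrator permissions and that the default mogdb database is reachable on the same port:
gsql -d mogdb -p 37300 -f backup/bkp2.sql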
gs_restore
Background
gs_restore, provided by MogDB, is used to import data from the files exported by gs_dump.
gs_restore is executed by OS user omm.
It has the following functions:
- Importing data to the database
  If a database is specified, data is imported into it. For a parallel import, the password for connecting to the database is required.
- Importing data to a script file
  If no target database is specified, a script containing the SQL statements required to recreate the database is created and written to a file or to standard output. This script output is equivalent to the plain-text output format of gs_dump.
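  For instance, the following sketch (the paths are illustrative) converts a .tar archive into a plain-text SQL script without connecting to any database, because no -d option is given:
  gs_restore backup/MPPDB_backup.tar -f restore_script.sql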
Command Format
gs_restore [OPTION]... FILE
NOTE:
- FILE does not have a short or long parameter. It is used to specify the location of the archive file.
- Either dbname or -l is required; they cannot be entered at the same time.
- gs_restore imports data incrementally by default. To prevent data exceptions caused by repeated import operations, you are advised to use the -c parameter during the import, which deletes database objects that already exist in the target database before recreating them.
- There is no option to control log printing. To hide logs, redirect them to a log file. If a large amount of table data needs to be restored, the data is restored in batches, so the log message indicating that table data has been imported appears multiple times.
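Following the advice above, a typical clean-and-restore invocation might look like this (a sketch; the port, database, and archive names are assumptions):
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb -c -e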
Parameter Description
Common parameters
-
-d, --dbname=NAME
Connects to the dbname database and imports data to the database.
-
-f, --file=FILENAME
Specifies the output file for the generated script, or uses the output file in the list specified using -l.
The default is the standard output.
NOTE: -f cannot be used in conjunction with -d.
-
-F, --format=c|d|t
Specifies the format of the archive. The format does not need to be specified, because gs_restore determines it automatically.
Value range:
- c/custom: The archive form is the customized format in gs_dump.
- d/directory: The archive form is a directory archive format.
- t/tar: The archive form is a .tar archive format.
-
-l, --list
Lists the table of contents of the archive. The output of this operation can be used as input to the -L parameter. If filtering parameters, such as -n or -t, are used together with -l, they will restrict the listed items.
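For example, the contents of a tar-format archive could be inspected like this (a sketch; the archive name is an assumption):
gs_restore -l backup/MPPDB_backup.tar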
-
-v, --verbose
Specifies the verbose mode.
-
-V, --version
Prints the gs_restore version and exits.
-
-?, --help
Displays help information about the parameters of gs_restore and exits.
Parameters for importing data
-
-a, --data-only
Imports only the data, not the schema (data definition). gs_restore incrementally imports data.
-
-c, --clean
Cleans (deletes) existing database objects in the database to be restored before recreating them.
-
-C, --create
Creates the database before importing data to it. (When this parameter is used, the database specified by -d is used to issue the initial CREATE DATABASE command. All data is imported to the created database.)
-
-e, --exit-on-error
Exits if an error occurs while sending SQL statements to the database. If this parameter is not specified, the remaining commands are still sent, and error information is displayed when the import ends.
-
-I, --index=NAME
Imports only the definition of the specified index. Enter -I index multiple times to import multiple indexes.
For example:
gs_restore -h host_name -p port_number -d mogdb -I Index1 -I Index2 backup/MPPDB_backup.tar
In this example, Index1 and Index2 will be imported.
-
-j, --jobs=NUM
Specifies the number of concurrent jobs for the most time-consuming operations of gs_restore (such as loading data, creating indexes, or creating constraints). This parameter can greatly reduce the time needed to import a large database to a server running on a multiprocessor machine.
Each job is one process or one thread, depending on the OS; and uses a separate connection to the server.
The optimal value for this option depends on the server hardware setting, the client, the network, the number of CPU cores, and disk settings. It is recommended that the parameter be set to the number of CPU cores on the server. In addition, a larger value can also lead to faster import in many cases. However, an overly large value will lead to decreased performance because of thrashing.
This parameter supports custom-format archives only. The input file must be a regular file (not a pipe). This parameter is ignored when you select the script method rather than connecting to a database server. In addition, multiple jobs cannot be used in conjunction with the --single-transaction parameter.
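For example, a four-job import might look like this (a sketch; the job count and connection values are assumptions):
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb -j 4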
-
-L, --use-list=FILENAME
Imports only archive elements that are listed in list-file and imports them in the order that they appear in the file. If filtering parameters, such as -n or -t, are used in conjunction with -L, they will further limit the items to be imported.
list-file is normally created by editing the output of a previous -l operation. Lines can be moved or removed, and can also be commented out by placing a semicolon (;) at the beginning of the line.
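A typical workflow might be (a sketch; the list and archive file names are assumptions):
gs_restore -l backup/MPPDB_backup.dmp > db.list
# Edit db.list: reorder lines or comment entries out with a leading semicolon.
gs_restore -L db.list -p 5432 -d mogdb backup/MPPDB_backup.dmp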
-
-n, --schema=NAME
Restores only objects in the listed schemas.
This parameter can be used in conjunction with the -t parameter to import a specific table.
Entering -n schemaname multiple times can import multiple schemas.
For example:
gs_restore -h host_name -p port_number -d mogdb -n sch1 -n sch2 backup/MPPDB_backup.tar
In this example, sch1 and sch2 will be imported.
-
-O, --no-owner
Does not output commands that set the ownership of objects to match the original database. By default, gs_restore issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set the ownership of created schema elements. These statements will fail unless the database is initially accessed by a system administrator (or by the user who owns all the objects in the script). With -O, any user name can be used for the initial connection, and this user will own all the created objects.
-
-P, --function=NAME(args)
Imports only the listed functions. You need to spell the function name and arguments exactly as they appear in the dump file in which the function exists.
Entering -P alone imports all function-name(args) functions in the file. Entering -P together with -n imports the function-name(args) functions in the specified schema. Entering -P multiple times while using -n once means that all imported functions default to the -n schema.
You can enter -n schema-name -P 'function-name(args)' multiple times to import functions in specified schemas.
For example:
gs_restore -h host_name -p port_number -d mogdb -n test1 -P 'Func1(integer)' -n test2 -P 'Func2(integer)' backup/MPPDB_backup.tar
In this example, both Func1 (i integer) in the test1 schema and Func2 (j integer) in the test2 schema will be imported.
-
-s, --schema-only
Imports only the schemas (data definitions), not the data (table contents). Current sequence values will not be imported.
-
-S, --sysadmin=NAME
Reserved for function expansion. This parameter is not recommended.
-
-t, --table=NAME
Imports only listed table definitions or data, or both. This parameter can be used in conjunction with the -n parameter to specify a table object in a schema. When -n is not entered, the default schema is PUBLIC. Entering -n schemaname -t tablename multiple times can import multiple tables in a specified schema.
For example:
Import table1 in the PUBLIC schema.
gs_restore -h host_name -p port_number -d mogdb -t table1 backup/MPPDB_backup.tar
Import test1 in the test1 schema and test2 in the test2 schema.
gs_restore -h host_name -p port_number -d mogdb -n test1 -t test1 -n test2 -t test2 backup/MPPDB_backup.tar
Import table1 in the PUBLIC schema and table1 in the test1 schema.
gs_restore -h host_name -p port_number -d mogdb -n PUBLIC -t table1 -n test1 -t table1 backup/MPPDB_backup.tar
-
-T, --trigger=NAME
This parameter is reserved for extension.
-
-x, --no-privileges/--no-acl
Prevents the import of access permissions (GRANT/REVOKE commands).
-
-1, --single-transaction
Executes the import as a single transaction (that is, commands are wrapped in BEGIN/COMMIT).
This ensures that either all the commands complete successfully or no changes are applied. This parameter implies --exit-on-error.
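For example (a sketch; the connection values and archive name are assumptions):
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb -1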
-
--disable-triggers
Reserved for function expansion. This parameter is not recommended.
-
--no-data-for-failed-tables
By default, table data is imported even if the statement to create the table fails (for example, because the table already exists). With this parameter, data for such tables is skipped. This behavior is useful if the target database already contains the desired table contents.
This parameter takes effect only when you import data directly into a database, not when you output SQL scripts.
-
--no-security-labels
Reserved for function expansion. This parameter is not recommended.
-
--no-tablespaces
Does not issue commands to select tablespaces. With this option, all objects are created in the default tablespace during the import, no matter which tablespace was selected when the data was exported.
-
--section=SECTION
Imports the listed sections (such as pre-data, data, or post-data).
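For example, only the object definitions that precede the data could be imported like this (a sketch; the connection values and archive name are assumptions):
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb --section=pre-data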
-
--use-set-session-authorization
Used for plain-text backup.
Outputs SET SESSION AUTHORIZATION statements instead of ALTER OWNER statements to determine object ownership. This makes the dump more standards-compatible, but if the records of objects in the exported file are referenced, the import may fail. Only administrators can use the SET SESSION AUTHORIZATION statement to dump data, and they must manually change and verify the passwords in the exported file against the SET SESSION AUTHORIZATION statement before the import. The ALTER OWNER statement requires lower permissions.
-
--with-key=KEY
Specifies the AES128 key. The key length must be 16 bytes.
NOTE: If the dump is encrypted, enter the --with-key=KEY parameter in the gs_restore command; otherwise, you will receive an error message. Enter the same key that was used for the dump. When the dump format is c or t, the dumped content has already been processed, and therefore the input is not restricted by the encryption.
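For example, a dump encrypted with a 16-byte key could be restored like this (a sketch; the key and connection values are assumptions):
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb --with-key=1234567812345678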
NOTICE:
-
If any local additions were made to the template1 database during the installation, restore the output of gs_restore into a truly empty database with caution. Otherwise, you are likely to obtain errors due to duplicate definitions of the added objects. To create an empty database without any local additions, create it from template0 rather than template1. Example:
CREATE DATABASE foo WITH TEMPLATE template0;
-
gs_restore cannot import large objects selectively (for example, only those of a specified table). If an archive contains large objects, all of them will be imported, or none of them will be restored if they are excluded by -L, -t, or other parameters.
NOTE:
- The -d/--dbname and -f/--file parameters do not coexist.
- The -s/--schema-only and -a/--data-only parameters do not coexist.
- The -c/--clean and -a/--data-only parameters do not coexist.
- When --single-transaction is used, -j/--jobs must be a single job.
- --role must be used in conjunction with --rolepassword.
Connection parameters:
-
-h, --host=HOSTNAME
Specifies the host name. If the value begins with a slash (/), it is used as the directory for the UNIX domain socket. The default value is taken from the PGHOST environment variable. If it is not set, a UNIX domain socket connection is attempted.
This parameter is used only for defining names of the hosts outside MogDB. The names of the hosts inside MogDB must be 127.0.0.1.
-
-p, --port=PORT
Specifies the TCP port or the local Unix domain socket file extension on which the server is listening for connections. The default value is taken from the PGPORT environment variable.
If the thread pool function is enabled, you are advised to use the pooler port, that is, the listening port number plus 1.
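For example, if the server listens on port 26000 with the thread pool enabled, the connection could go through the pooler port (a sketch; the ports and archive name are assumptions):
gs_restore backup/MPPDB_backup.dmp -p 26001 -d mogdb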
-
-U, --username=NAME
Specifies the user name to connect to.
-
-w, --no-password
Never issues a password prompt. The connection attempt fails if the host requires password verification and the password is not provided in other ways. This parameter is useful in batch jobs and scripts in which no user password is required.
-
-W, --password=PASSWORD
Specifies the user password for the connection. If the host uses the trust authentication policy, the administrator does not need to enter the -W parameter. If the -W parameter is not provided and you are not a system administrator, gs_restore will prompt you for a password.
-
--role=ROLENAME
Specifies a role name for the import operation. If this parameter is selected, the SET ROLE statement will be issued after gs_restore connects to the database. It is useful when the authenticated user (specified by -U) lacks the permissions required by gs_restore. This parameter allows the user to switch to a role with the required permissions. Some installations have a policy against logging in directly as the initial user. This parameter allows data to be imported without violating the policy.
-
--rolepassword=ROLEPASSWORD
Specifies the password of the specific role.
Examples
Special case: Execute the gsql tool to import the plain-text MPPDB_backup.sql file generated by gs_dump/gs_dumpall in the export folder into the mogdb database:
gsql -d mogdb -p 5432 -W Bigdata@123 -f /home/omm/test/MPPDB_backup.sql
SET
SET
SET
SET
SET
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
SET
CREATE INDEX
REVOKE
REVOKE
GRANT
GRANT
total time: 30476 ms
gs_restore is used to import the files exported by gs_dump.
Example 1: Execute the gs_restore tool to import the exported MPPDB_backup.dmp file (custom format) to the mogdb database.
gs_restore -W Bigdata@123 backup/MPPDB_backup.dmp -p 5432 -d mogdb
gs_restore: restore operation successful
gs_restore: total time: 13053 ms
Example 2: Execute the gs_restore tool to import the exported MPPDB_backup.tar file (.tar format) to the mogdb database.
gs_restore backup/MPPDB_backup.tar -p 5432 -d mogdb
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 21203 ms
Example 3: Execute the gs_restore tool to import the exported MPPDB_backup file (directory format) to the mogdb database.
gs_restore backup/MPPDB_backup -p 5432 -d mogdb
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 21003 ms
Example 4: Execute the gs_restore tool and run the following commands to import the MPPDB_backup.dmp file (in custom format). Specifically, import all the object definitions and data in the PUBLIC schema. Existing objects are deleted from the target database before the import. If an object to be deleted is referenced by an object in another schema, you need to manually delete the dependent object first.
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb -e -c -n PUBLIC
gs_restore: [archiver (db)] Error while PROCESSING TOC:
gs_restore: [archiver (db)] Error from TOC entry 313; 1259 337399 TABLE table1 gaussdba
gs_restore: [archiver (db)] could not execute query: ERROR: cannot drop table table1 because other objects depend on it
DETAIL: view t1.v1 depends on table table1
HINT: Use DROP ... CASCADE to drop the dependent objects too.
Command was: DROP TABLE public.table1;
Manually delete the dependent object, run the import again, and recreate the deleted object after the import is complete.
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb -e -c -n PUBLIC
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 2203 ms
Example 5: Execute the gs_restore tool and run the following commands to import the MPPDB_backup.dmp file (in custom format). Specifically, import only the definition of table1 in the PUBLIC schema.
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb -e -c -s -n PUBLIC -t table1
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 21000 ms
Example 6: Execute the gs_restore tool and run the following commands to import the MPPDB_backup.dmp file (in custom format). Specifically, import only the data of table1 in the PUBLIC schema.
gs_restore backup/MPPDB_backup.dmp -p 5432 -d mogdb -e -a -n PUBLIC -t table1
gs_restore[2017-07-21 19:16:26]: restore operation successful
gs_restore[2017-07-21 19:16:26]: total time: 20203 ms
gs_backup
Background
After MogDB is deployed, problems and exceptions may occur during database running. gs_backup, provided by MogDB, is used to back up and restore important MogDB data, and to display help and version information.
Prerequisites
- The MogDB database can be connected.
- During the restoration, backup files must exist in the backup directory on all the nodes. If backup files are lost on any node, copy them to it from another node. For binary files, you need to change the node name in the file name.
- You need to execute the gs_backup command as OS user omm.
Syntax
-
Backup database host
gs_backup -t backup --backup-dir=BACKUPDIR [-h HOSTNAME] [--parameter] [--binary] [--all] [-l LOGFILE]
-
Restore database host
gs_backup -t restore --backup-dir=BACKUPDIR [-h HOSTNAME] [--parameter] [--binary] [--all] [-l LOGFILE]
-
Display help information
gs_backup -? | --help
-
Display version information
gs_backup -V | --version
Parameter Description
The gs_backup tool can use the following types of parameters:
-
Backup database host parameters:
-
-h
Specifies the name of the host where the backup file is stored.
Value range: host name. If no host name is specified, the backup is performed on all hosts in MogDB.
-
--backup-dir=BACKUPDIR
Specifies the path in which the backup file is saved.
-
--parameter
Backs up the parameter file. If none of --parameter, --binary, and --all is specified, only the parameter file is backed up by default.
-
--binary
Backs up the binary file.
-
--all
Backs up both binary and parameter files.
-
-l
Specifies the log file and its storage path.
Default value: $GAUSSLOG/om/gs_backup-YYYY-MM-DD_hhmmss.log
-
-
Restore database host parameters:
-
-h
Specifies the name of the host to be restored.
Value range: host name. If no host is specified, all of MogDB is restored.
-
--backup-dir=BACKUPDIR
Specifies the path from which the backup files are read for the restoration.
-
--parameter
Restores the parameter file. If none of --parameter, --binary, and --all is specified, only the parameter file is restored by default.
-
--binary
Restores the binary file.
-
--all
Restores both binary and parameter files.
-
-l
Specifies the log file and its storage path.
Default value: $GAUSSLOG/om/gs_backup-YYYY-MM-DD_hhmmss.log
-
-
Other parameters:
-
-?, --help
Displays help information.
-
-V, --version
Displays version information.
-
Examples
-
Use the gs_backup script to back up the database host.
gs_backup -t backup --backup-dir=/opt/software/mogdb/backup_dir -h plat1 --parameter
Backing up MogDB.
Parsing configuration files.
Successfully parsed the configuration file.
Performing remote backup.
Remote backup succeeded.
Successfully backed up MogDB.
-
Use the gs_backup script to restore the database host.
gs_backup -t restore --backup-dir=/opt/software/mogdb/backup_dir -h plat1 --parameter
Restoring MogDB.
Parsing the configuration file.
Successfully parsed configuration files.
Performing remote restoration.
Remote restoration succeeded.
Successfully restored MogDB.
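To back up both the binary and parameter files in one run, the same options could be combined with --all (a sketch; the backup directory and host name are assumptions):
gs_backup -t backup --backup-dir=/opt/software/mogdb/backup_dir -h plat1 --all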