- About MogDB
- Quick Start
- MogDB Playground
- Container-based MogDB Installation
- Installation on a Single Node
- MogDB Access
- Use CLI to Access MogDB
- Use GUI to Access MogDB
- Use Middleware to Access MogDB
- Use Programming Language to Access MogDB
- Using Sample Dataset Mogila
- Characteristic Description
- High Performance
- High Availability (HA)
- Maintainability
- Database Security
- Access Control Model
- Separation of Control and Access Permissions
- Database Encryption Authentication
- Data Encryption and Storage
- Database Audit
- Network Communication Security
- Resource Label
- Unified Audit
- Dynamic Data Anonymization
- Row-Level Access Control
- Password Strength Verification
- Equality Query in a Fully-encrypted Database
- Ledger Database Mechanism
- Enterprise-Level Features
- Support for Functions and Stored Procedures
- SQL Hints
- Full-Text Indexing
- Copy Interface for Error Tolerance
- Partitioning
- Support for Advanced Analysis Functions
- Materialized View
- HyperLogLog
- Creating an Index Online
- Autonomous Transaction
- Global Temporary Table
- Pseudocolumn ROWNUM
- Stored Procedure Debugging
- JDBC Client Load Balancing and Read/Write Isolation
- In-place Update Storage Engine
- Application Development Interfaces
- AI Capabilities
- Installation Guide
- Container Installation
- Simplified Installation Process
- Standard Installation
- Manual Installation
- Administrator Guide
- Routine Maintenance
- Starting and Stopping MogDB
- Using the gsql Client for Connection
- Routine Maintenance
- Checking OS Parameters
- Checking MogDB Health Status
- Checking Database Performance
- Checking and Deleting Logs
- Checking Time Consistency
- Checking The Number of Application Connections
- Routinely Maintaining Tables
- Routinely Recreating an Index
- Data Security Maintenance Suggestions
- Log Reference
- Primary and Standby Management
- MOT Engine
- Introducing MOT
- Using MOT
- Concepts of MOT
- Appendix
- Column-store Tables Management
- Backup and Restoration
- Importing and Exporting Data
- Importing Data
- Exporting Data
- Upgrade Guide
- Routine Maintenance
- AI Features Guide
- Overview
- Predictor: AI Query Time Forecasting
- X-Tuner: Parameter Optimization and Diagnosis
- SQLdiag: Slow SQL Discovery
- A-Detection: Status Monitoring
- Index-advisor: Index Recommendation
- DeepSQL
- AI-Native Database (DB4AI)
- Security Guide
- Developer Guide
- Application Development Guide
- Development Specifications
- Development Based on JDBC
- Overview
- JDBC Package, Driver Class, and Environment Class
- Development Process
- Loading the Driver
- Connecting to a Database
- Connecting to the Database (Using SSL)
- Running SQL Statements
- Processing Data in a Result Set
- Closing a Connection
- Managing Logs
- Example: Common Operations
- Example: Retrying SQL Queries for Applications
- Example: Importing and Exporting Data Through Local Files
- Example 2: Migrating Data from a MY Database to MogDB
- Example: Logic Replication Code
- Example: Parameters for Connecting to the Database in Different Scenarios
- JDBC API Reference
- java.sql.Connection
- java.sql.CallableStatement
- java.sql.DatabaseMetaData
- java.sql.Driver
- java.sql.PreparedStatement
- java.sql.ResultSet
- java.sql.ResultSetMetaData
- java.sql.Statement
- javax.sql.ConnectionPoolDataSource
- javax.sql.DataSource
- javax.sql.PooledConnection
- javax.naming.Context
- javax.naming.spi.InitialContextFactory
- CopyManager
- Development Based on ODBC
- Development Based on libpq
- Development Based on libpq
- libpq API Reference
- Database Connection Control Functions
- Database Statement Execution Functions
- Functions for Asynchronous Command Processing
- Functions for Canceling Queries in Progress
- Example
- Connection Characters
- Psycopg-Based Development
- Commissioning
- Appendices
- Stored Procedure
- User Defined Functions
- PL/pgSQL-SQL Procedural Language
- Scheduled Jobs
- Autonomous Transaction
- Logical Replication
- Logical Decoding
- Foreign Data Wrapper
- Materialized View
- Materialized View Overview
- Full Materialized View
- Incremental Materialized View
- Resource Load Management
- Overview
- Resource Management Preparation
- Application Development Guide
- Performance Tuning Guide
- System Optimization
- SQL Optimization
- WDR Snapshot Schema
- TPCC Performance Tuning Guide
- Reference Guide
- System Catalogs and System Views
- Overview of System Catalogs and System Views
- System Catalogs
- GS_AUDITING_POLICY
- GS_AUDITING_POLICY_ACCESS
- GS_AUDITING_POLICY_FILTERS
- GS_AUDITING_POLICY_PRIVILEGES
- GS_CLIENT_GLOBAL_KEYS
- GS_CLIENT_GLOBAL_KEYS_ARGS
- GS_COLUMN_KEYS
- GS_COLUMN_KEYS_ARGS
- GS_ENCRYPTED_COLUMNS
- GS_ENCRYPTED_PROC
- GS_GLOBAL_CHAIN
- GS_MASKING_POLICY
- GS_MASKING_POLICY_ACTIONS
- GS_MASKING_POLICY_FILTERS
- GS_MATVIEW
- GS_MATVIEW_DEPENDENCY
- GS_OPT_MODEL
- GS_POLICY_LABEL
- GS_RECYCLEBIN
- GS_TXN_SNAPSHOT
- GS_WLM_INSTANCE_HISTORY
- GS_WLM_OPERATOR_INFO
- GS_WLM_PLAN_ENCODING_TABLE
- GS_WLM_PLAN_OPERATOR_INFO
- GS_WLM_EC_OPERATOR_INFO
- PG_AGGREGATE
- PG_AM
- PG_AMOP
- PG_AMPROC
- PG_APP_WORKLOADGROUP_MAPPING
- PG_ATTRDEF
- PG_ATTRIBUTE
- PG_AUTHID
- PG_AUTH_HISTORY
- PG_AUTH_MEMBERS
- PG_CAST
- PG_CLASS
- PG_COLLATION
- PG_CONSTRAINT
- PG_CONVERSION
- PG_DATABASE
- PG_DB_ROLE_SETTING
- PG_DEFAULT_ACL
- PG_DEPEND
- PG_DESCRIPTION
- PG_DIRECTORY
- PG_ENUM
- PG_EXTENSION
- PG_EXTENSION_DATA_SOURCE
- PG_FOREIGN_DATA_WRAPPER
- PG_FOREIGN_SERVER
- PG_FOREIGN_TABLE
- PG_INDEX
- PG_INHERITS
- PG_JOB
- PG_JOB_PROC
- PG_LANGUAGE
- PG_LARGEOBJECT
- PG_LARGEOBJECT_METADATA
- PG_NAMESPACE
- PG_OBJECT
- PG_OPCLASS
- PG_OPERATOR
- PG_OPFAMILY
- PG_PARTITION
- PG_PLTEMPLATE
- PG_PROC
- PG_RANGE
- PG_RESOURCE_POOL
- PG_REWRITE
- PG_RLSPOLICY
- PG_SECLABEL
- PG_SHDEPEND
- PG_SHDESCRIPTION
- PG_SHSECLABEL
- PG_STATISTIC
- PG_STATISTIC_EXT
- PG_SYNONYM
- PG_TABLESPACE
- PG_TRIGGER
- PG_TS_CONFIG
- PG_TS_CONFIG_MAP
- PG_TS_DICT
- PG_TS_PARSER
- PG_TS_TEMPLATE
- PG_TYPE
- PG_USER_MAPPING
- PG_USER_STATUS
- PG_WORKLOAD_GROUP
- PLAN_TABLE_DATA
- STATEMENT_HISTORY
- System Views
- GET_GLOBAL_PREPARED_XACTS
- GS_AUDITING
- GS_AUDITING_ACCESS
- GS_AUDITING_PRIVILEGE
- GS_CLUSTER_RESOURCE_INFO
- GS_INSTANCE_TIME
- GS_LABELS
- GS_MASKING
- GS_MATVIEWS
- GS_SESSION_MEMORY
- GS_SESSION_CPU_STATISTICS
- GS_SESSION_MEMORY_CONTEXT
- GS_SESSION_MEMORY_DETAIL
- GS_SESSION_MEMORY_STATISTICS
- GS_SQL_COUNT
- GS_WLM_CGROUP_INFO
- GS_WLM_PLAN_OPERATOR_HISTORY
- GS_WLM_REBUILD_USER_RESOURCE_POOL
- GS_WLM_RESOURCE_POOL
- GS_WLM_USER_INFO
- GS_STAT_SESSION_CU
- GS_TOTAL_MEMORY_DETAIL
- MPP_TABLES
- PG_AVAILABLE_EXTENSION_VERSIONS
- PG_AVAILABLE_EXTENSIONS
- PG_COMM_DELAY
- PG_COMM_RECV_STREAM
- PG_COMM_SEND_STREAM
- PG_COMM_STATUS
- PG_CONTROL_GROUP_CONFIG
- PG_CURSORS
- PG_EXT_STATS
- PG_GET_INVALID_BACKENDS
- PG_GET_SENDERS_CATCHUP_TIME
- PG_GROUP
- PG_GTT_RELSTATS
- PG_GTT_STATS
- PG_GTT_ATTACHED_PIDS
- PG_INDEXES
- PG_LOCKS
- PG_NODE_ENV
- PG_OS_THREADS
- PG_PREPARED_STATEMENTS
- PG_PREPARED_XACTS
- PG_REPLICATION_SLOTS
- PG_RLSPOLICIES
- PG_ROLES
- PG_RULES
- PG_SECLABELS
- PG_SETTINGS
- PG_SHADOW
- PG_STATS
- PG_STAT_ACTIVITY
- PG_STAT_ALL_INDEXES
- PG_STAT_ALL_TABLES
- PG_STAT_BAD_BLOCK
- PG_STAT_BGWRITER
- PG_STAT_DATABASE
- PG_STAT_DATABASE_CONFLICTS
- PG_STAT_USER_FUNCTIONS
- PG_STAT_USER_INDEXES
- PG_STAT_USER_TABLES
- PG_STAT_REPLICATION
- PG_STAT_SYS_INDEXES
- PG_STAT_SYS_TABLES
- PG_STAT_XACT_ALL_TABLES
- PG_STAT_XACT_SYS_TABLES
- PG_STAT_XACT_USER_FUNCTIONS
- PG_STAT_XACT_USER_TABLES
- PG_STATIO_ALL_INDEXES
- PG_STATIO_ALL_SEQUENCES
- PG_STATIO_ALL_TABLES
- PG_STATIO_SYS_INDEXES
- PG_STATIO_SYS_SEQUENCES
- PG_STATIO_SYS_TABLES
- PG_STATIO_USER_INDEXES
- PG_STATIO_USER_SEQUENCES
- PG_STATIO_USER_TABLES
- PG_TABLES
- PG_TDE_INFO
- PG_THREAD_WAIT_STATUS
- PG_TIMEZONE_ABBREVS
- PG_TIMEZONE_NAMES
- PG_TOTAL_MEMORY_DETAIL
- PG_TOTAL_USER_RESOURCE_INFO
- PG_TOTAL_USER_RESOURCE_INFO_OID
- PG_USER
- PG_USER_MAPPINGS
- PG_VARIABLE_INFO
- PG_VIEWS
- PLAN_TABLE
- GS_FILE_STAT
- GS_OS_RUN_INFO
- GS_REDO_STAT
- GS_SESSION_STAT
- GS_SESSION_TIME
- GS_THREAD_MEMORY_CONTEXT
- Functions and Operators
- Logical Operators
- Comparison Operators
- Character Processing Functions and Operators
- Binary String Functions and Operators
- Bit String Functions and Operators
- Pattern Matching Operators
- Mathematical Functions and Operators
- Date and Time Processing Functions and Operators
- Type Conversion Functions
- Geometric Functions and Operators
- Network Address Functions and Operators
- Text Search Functions and Operators
- JSON/JSONB Functions and Operators
- HLL Functions and Operators
- SEQUENCE Functions
- Array Functions and Operators
- Range Functions and Operators
- Aggregate Functions
- Window Functions
- Security Functions
- Ledger Database Functions
- Encrypted Equality Functions
- Set Returning Functions
- Conditional Expression Functions
- System Information Functions
- System Administration Functions
- Configuration Settings Functions
- Universal File Access Functions
- Server Signal Functions
- Backup and Restoration Control Functions
- Snapshot Synchronization Functions
- Database Object Functions
- Advisory Lock Functions
- Logical Replication Functions
- Segment-Page Storage Functions
- Other Functions
- Undo System Functions
- Statistics Information Functions
- Trigger Functions
- Hash Function
- Prompt Message Function
- Global Temporary Table Functions
- Fault Injection System Function
- AI Feature Functions
- Dynamic Data Masking Functions
- Other System Functions
- Internal Functions
- Obsolete Functions
- Supported Data Types
- Numeric Types
- Monetary Types
- Boolean Types
- Enumerated Types
- Character Types
- Binary Types
- Date/Time Types
- Geometric
- Network Address Types
- Bit String Types
- Text Search Types
- UUID
- JSON/JSONB Types
- HLL
- Array Types
- Range
- OID Types
- Pseudo-Types
- Data Types Supported by Column-store Tables
- XML Types
- Data Type Used by the Ledger Database
- SQL Syntax
- ABORT
- ALTER AGGREGATE
- ALTER AUDIT POLICY
- ALTER DATABASE
- ALTER DATA SOURCE
- ALTER DEFAULT PRIVILEGES
- ALTER DIRECTORY
- ALTER EXTENSION
- ALTER FOREIGN TABLE
- ALTER FUNCTION
- ALTER GROUP
- ALTER INDEX
- ALTER LANGUAGE
- ALTER LARGE OBJECT
- ALTER MASKING POLICY
- ALTER MATERIALIZED VIEW
- ALTER OPERATOR
- ALTER RESOURCE LABEL
- ALTER RESOURCE POOL
- ALTER ROLE
- ALTER ROW LEVEL SECURITY POLICY
- ALTER RULE
- ALTER SCHEMA
- ALTER SEQUENCE
- ALTER SERVER
- ALTER SESSION
- ALTER SYNONYM
- ALTER SYSTEM KILL SESSION
- ALTER SYSTEM SET
- ALTER TABLE
- ALTER TABLE PARTITION
- ALTER TABLE SUBPARTITION
- ALTER TABLESPACE
- ALTER TEXT SEARCH CONFIGURATION
- ALTER TEXT SEARCH DICTIONARY
- ALTER TRIGGER
- ALTER TYPE
- ALTER USER
- ALTER USER MAPPING
- ALTER VIEW
- ANALYZE | ANALYSE
- BEGIN
- CALL
- CHECKPOINT
- CLEAN CONNECTION
- CLOSE
- CLUSTER
- COMMENT
- COMMIT | END
- COMMIT PREPARED
- CONNECT BY
- COPY
- CREATE AGGREGATE
- CREATE AUDIT POLICY
- CREATE CAST
- CREATE CLIENT MASTER KEY
- CREATE COLUMN ENCRYPTION KEY
- CREATE DATABASE
- CREATE DATA SOURCE
- CREATE DIRECTORY
- CREATE EXTENSION
- CREATE FOREIGN TABLE
- CREATE FUNCTION
- CREATE GROUP
- CREATE INCREMENTAL MATERIALIZED VIEW
- CREATE INDEX
- CREATE LANGUAGE
- CREATE MASKING POLICY
- CREATE MATERIALIZED VIEW
- CREATE MODEL
- CREATE OPERATOR
- CREATE PACKAGE
- CREATE ROW LEVEL SECURITY POLICY
- CREATE PROCEDURE
- CREATE RESOURCE LABEL
- CREATE RESOURCE POOL
- CREATE ROLE
- CREATE RULE
- CREATE SCHEMA
- CREATE SEQUENCE
- CREATE SERVER
- CREATE SYNONYM
- CREATE TABLE
- CREATE TABLE AS
- CREATE TABLE PARTITION
- CREATE TABLE SUBPARTITION
- CREATE TABLESPACE
- CREATE TEXT SEARCH CONFIGURATION
- CREATE TEXT SEARCH DICTIONARY
- CREATE TRIGGER
- CREATE TYPE
- CREATE USER
- CREATE USER MAPPING
- CREATE VIEW
- CREATE WEAK PASSWORD DICTIONARY
- CURSOR
- DEALLOCATE
- DECLARE
- DELETE
- DO
- DROP AGGREGATE
- DROP AUDIT POLICY
- DROP CAST
- DROP CLIENT MASTER KEY
- DROP COLUMN ENCRYPTION KEY
- DROP DATABASE
- DROP DATA SOURCE
- DROP DIRECTORY
- DROP EXTENSION
- DROP FOREIGN TABLE
- DROP FUNCTION
- DROP GROUP
- DROP INDEX
- DROP LANGUAGE
- DROP MASKING POLICY
- DROP MATERIALIZED VIEW
- DROP MODEL
- DROP OPERATOR
- DROP OWNED
- DROP PACKAGE
- DROP PROCEDURE
- DROP RESOURCE LABEL
- DROP RESOURCE POOL
- DROP ROW LEVEL SECURITY POLICY
- DROP ROLE
- DROP RULE
- DROP SCHEMA
- DROP SEQUENCE
- DROP SERVER
- DROP SYNONYM
- DROP TABLE
- DROP TABLESPACE
- DROP TEXT SEARCH CONFIGURATION
- DROP TEXT SEARCH DICTIONARY
- DROP TRIGGER
- DROP TYPE
- DROP USER
- DROP USER MAPPING
- DROP VIEW
- DROP WEAK PASSWORD DICTIONARY
- EXECUTE
- EXECUTE DIRECT
- EXPLAIN
- EXPLAIN PLAN
- FETCH
- GRANT
- INSERT
- LOCK
- MOVE
- MERGE INTO
- PREDICT BY
- PREPARE
- PREPARE TRANSACTION
- PURGE
- REASSIGN OWNED
- REFRESH INCREMENTAL MATERIALIZED VIEW
- REFRESH MATERIALIZED VIEW
- REINDEX
- RELEASE SAVEPOINT
- RESET
- REVOKE
- ROLLBACK
- ROLLBACK PREPARED
- ROLLBACK TO SAVEPOINT
- SAVEPOINT
- SELECT
- SELECT INTO
- SET
- SET CONSTRAINTS
- SET ROLE
- SET SESSION AUTHORIZATION
- SET TRANSACTION
- SHOW
- SHUTDOWN
- SNAPSHOT
- START TRANSACTION
- TIMECAPSULE TABLE
- TRUNCATE
- UPDATE
- VACUUM
- VALUES
- SQL Reference
- MogDB SQL
- Keywords
- Constant and Macro
- Expressions
- Type Conversion
- Full Text Search
- Introduction
- Tables and Indexes
- Controlling Text Search
- Additional Features
- Parser
- Dictionaries
- Configuration Examples
- Testing and Debugging Text Search
- Limitations
- System Operation
- Controlling Transactions
- DDL Syntax Overview
- DML Syntax Overview
- DCL Syntax Overview
- Appendix
- GUC Parameters
- GUC Parameter Usage
- File Location
- Connection and Authentication
- Resource Consumption
- Parallel Import
- Write Ahead Log
- HA Replication
- Memory Table
- Query Planning
- Error Reporting and Logging
- Alarm Detection
- Statistics During the Database Running
- Load Management
- Automatic Vacuuming
- Default Settings of Client Connection
- Lock Management
- Version and Platform Compatibility
- Fault Tolerance
- Connection Pool Parameters
- MogDB Transaction
- Developer Options
- Auditing
- Upgrade Parameters
- Miscellaneous Parameters
- Wait Events
- Query
- System Performance Snapshot
- Security Configuration
- Global Temporary Table
- HyperLogLog
- Scheduled Task
- Thread Pool
- User-defined Functions
- Backup and Restoration
- Undo
- DCF Parameters Settings
- Flashback
- Rollback Parameters
- Reserved Parameters
- AI Features
- Appendix
- Schema
- Information Schema
- DBE_PERF
- Overview
- OS
- Instance
- Memory
- File
- Object
- STAT_USER_TABLES
- SUMMARY_STAT_USER_TABLES
- GLOBAL_STAT_USER_TABLES
- STAT_USER_INDEXES
- SUMMARY_STAT_USER_INDEXES
- GLOBAL_STAT_USER_INDEXES
- STAT_SYS_TABLES
- SUMMARY_STAT_SYS_TABLES
- GLOBAL_STAT_SYS_TABLES
- STAT_SYS_INDEXES
- SUMMARY_STAT_SYS_INDEXES
- GLOBAL_STAT_SYS_INDEXES
- STAT_ALL_TABLES
- SUMMARY_STAT_ALL_TABLES
- GLOBAL_STAT_ALL_TABLES
- STAT_ALL_INDEXES
- SUMMARY_STAT_ALL_INDEXES
- GLOBAL_STAT_ALL_INDEXES
- STAT_DATABASE
- SUMMARY_STAT_DATABASE
- GLOBAL_STAT_DATABASE
- STAT_DATABASE_CONFLICTS
- SUMMARY_STAT_DATABASE_CONFLICTS
- GLOBAL_STAT_DATABASE_CONFLICTS
- STAT_XACT_ALL_TABLES
- SUMMARY_STAT_XACT_ALL_TABLES
- GLOBAL_STAT_XACT_ALL_TABLES
- STAT_XACT_SYS_TABLES
- SUMMARY_STAT_XACT_SYS_TABLES
- GLOBAL_STAT_XACT_SYS_TABLES
- STAT_XACT_USER_TABLES
- SUMMARY_STAT_XACT_USER_TABLES
- GLOBAL_STAT_XACT_USER_TABLES
- STAT_XACT_USER_FUNCTIONS
- SUMMARY_STAT_XACT_USER_FUNCTIONS
- GLOBAL_STAT_XACT_USER_FUNCTIONS
- STAT_BAD_BLOCK
- SUMMARY_STAT_BAD_BLOCK
- GLOBAL_STAT_BAD_BLOCK
- STAT_USER_FUNCTIONS
- SUMMARY_STAT_USER_FUNCTIONS
- GLOBAL_STAT_USER_FUNCTIONS
- Workload
- Session/Thread
- SESSION_STAT
- GLOBAL_SESSION_STAT
- SESSION_TIME
- GLOBAL_SESSION_TIME
- SESSION_MEMORY
- GLOBAL_SESSION_MEMORY
- SESSION_MEMORY_DETAIL
- GLOBAL_SESSION_MEMORY_DETAIL
- SESSION_STAT_ACTIVITY
- GLOBAL_SESSION_STAT_ACTIVITY
- THREAD_WAIT_STATUS
- GLOBAL_THREAD_WAIT_STATUS
- LOCAL_THREADPOOL_STATUS
- GLOBAL_THREADPOOL_STATUS
- SESSION_CPU_RUNTIME
- SESSION_MEMORY_RUNTIME
- STATEMENT_IOSTAT_COMPLEX_RUNTIME
- LOCAL_ACTIVE_SESSION
- Transaction
- Query
- STATEMENT
- SUMMARY_STATEMENT
- STATEMENT_COUNT
- GLOBAL_STATEMENT_COUNT
- SUMMARY_STATEMENT_COUNT
- GLOBAL_STATEMENT_COMPLEX_HISTORY
- GLOBAL_STATEMENT_COMPLEX_HISTORY_TABLE
- GLOBAL_STATEMENT_COMPLEX_RUNTIME
- STATEMENT_RESPONSETIME_PERCENTILE
- STATEMENT_USER_COMPLEX_HISTORY
- STATEMENT_COMPLEX_RUNTIME
- STATEMENT_COMPLEX_HISTORY_TABLE
- STATEMENT_COMPLEX_HISTORY
- STATEMENT_WLMSTAT_COMPLEX_RUNTIME
- STATEMENT_HISTORY
- Cache/IO
- STATIO_USER_TABLES
- SUMMARY_STATIO_USER_TABLES
- GLOBAL_STATIO_USER_TABLES
- STATIO_USER_INDEXES
- SUMMARY_STATIO_USER_INDEXES
- GLOBAL_STATIO_USER_INDEXES
- STATIO_USER_SEQUENCES
- SUMMARY_STATIO_USER_SEQUENCES
- GLOBAL_STATIO_USER_SEQUENCES
- STATIO_SYS_TABLES
- SUMMARY_STATIO_SYS_TABLES
- GLOBAL_STATIO_SYS_TABLES
- STATIO_SYS_INDEXES
- SUMMARY_STATIO_SYS_INDEXES
- GLOBAL_STATIO_SYS_INDEXES
- STATIO_SYS_SEQUENCES
- SUMMARY_STATIO_SYS_SEQUENCES
- GLOBAL_STATIO_SYS_SEQUENCES
- STATIO_ALL_TABLES
- SUMMARY_STATIO_ALL_TABLES
- GLOBAL_STATIO_ALL_TABLES
- STATIO_ALL_INDEXES
- SUMMARY_STATIO_ALL_INDEXES
- GLOBAL_STATIO_ALL_INDEXES
- STATIO_ALL_SEQUENCES
- SUMMARY_STATIO_ALL_SEQUENCES
- GLOBAL_STATIO_ALL_SEQUENCES
- GLOBAL_STAT_DB_CU
- GLOBAL_STAT_SESSION_CU
- Utility
- REPLICATION_STAT
- GLOBAL_REPLICATION_STAT
- REPLICATION_SLOTS
- GLOBAL_REPLICATION_SLOTS
- BGWRITER_STAT
- GLOBAL_BGWRITER_STAT
- GLOBAL_CKPT_STATUS
- GLOBAL_DOUBLE_WRITE_STATUS
- GLOBAL_PAGEWRITER_STATUS
- GLOBAL_RECORD_RESET_TIME
- GLOBAL_REDO_STATUS
- GLOBAL_RECOVERY_STATUS
- CLASS_VITAL_INFO
- USER_LOGIN
- SUMMARY_USER_LOGIN
- GLOBAL_GET_BGWRITER_STATUS
- GLOBAL_SINGLE_FLUSH_DW_STATUS
- GLOBAL_CANDIDATE_STATUS
- Lock
- Wait Events
- Configuration
- Operator
- Workload Manager
- Global Plancache
- RTO
- Appendix
- DBE_PLDEBUGGER Schema
- Overview
- DBE_PLDEBUGGER.turn_on
- DBE_PLDEBUGGER.turn_off
- DBE_PLDEBUGGER.local_debug_server_info
- DBE_PLDEBUGGER.attach
- DBE_PLDEBUGGER.next
- DBE_PLDEBUGGER.continue
- DBE_PLDEBUGGER.abort
- DBE_PLDEBUGGER.print_var
- DBE_PLDEBUGGER.info_code
- DBE_PLDEBUGGER.step
- DBE_PLDEBUGGER.add_breakpoint
- DBE_PLDEBUGGER.delete_breakpoint
- DBE_PLDEBUGGER.info_breakpoints
- DBE_PLDEBUGGER.backtrace
- DBE_PLDEBUGGER.finish
- DBE_PLDEBUGGER.set_var
- DB4AI Schema
- Tool Reference
- Tool Overview
- Client Tool
- Server Tools
- Tools Used in the Internal System
- gaussdb
- gs_backup
- gs_basebackup
- gs_ctl
- gs_initdb
- gs_install
- gs_install_plugin
- gs_install_plugin_local
- gs_postuninstall
- gs_preinstall
- gs_sshexkey
- gs_tar
- gs_uninstall
- gs_upgradectl
- gs_expansion
- gs_dropnode
- gs_probackup
- gstrace
- kdb5_util
- kadmin.local
- kinit
- klist
- krb5kdc
- kdestroy
- pg_config
- pg_controldata
- pg_recvlogical
- pg_resetxlog
- pg_archivecleanup
- pssh
- pscp
- transfer.py
- FAQ
- System Catalogs and Views Supported by gs_collector
- Extension Reference
- Error Code Reference
- Description of SQL Error Codes
- Third-Party Library Error Codes
- GAUSS-00001 - GAUSS-00100
- GAUSS-00101 - GAUSS-00200
- GAUSS-00201 - GAUSS-00300
- GAUSS-00301 - GAUSS-00400
- GAUSS-00401 - GAUSS-00500
- GAUSS-00501 - GAUSS-00600
- GAUSS-00601 - GAUSS-00700
- GAUSS-00701 - GAUSS-00800
- GAUSS-00801 - GAUSS-00900
- GAUSS-00901 - GAUSS-01000
- GAUSS-01001 - GAUSS-01100
- GAUSS-01101 - GAUSS-01200
- GAUSS-01201 - GAUSS-01300
- GAUSS-01301 - GAUSS-01400
- GAUSS-01401 - GAUSS-01500
- GAUSS-01501 - GAUSS-01600
- GAUSS-01601 - GAUSS-01700
- GAUSS-01701 - GAUSS-01800
- GAUSS-01801 - GAUSS-01900
- GAUSS-01901 - GAUSS-02000
- GAUSS-02001 - GAUSS-02100
- GAUSS-02101 - GAUSS-02200
- GAUSS-02201 - GAUSS-02300
- GAUSS-02301 - GAUSS-02400
- GAUSS-02401 - GAUSS-02500
- GAUSS-02501 - GAUSS-02600
- GAUSS-02601 - GAUSS-02700
- GAUSS-02701 - GAUSS-02800
- GAUSS-02801 - GAUSS-02900
- GAUSS-02901 - GAUSS-03000
- GAUSS-03001 - GAUSS-03100
- GAUSS-03101 - GAUSS-03200
- GAUSS-03201 - GAUSS-03300
- GAUSS-03301 - GAUSS-03400
- GAUSS-03401 - GAUSS-03500
- GAUSS-03501 - GAUSS-03600
- GAUSS-03601 - GAUSS-03700
- GAUSS-03701 - GAUSS-03800
- GAUSS-03801 - GAUSS-03900
- GAUSS-03901 - GAUSS-04000
- GAUSS-04001 - GAUSS-04100
- GAUSS-04101 - GAUSS-04200
- GAUSS-04201 - GAUSS-04300
- GAUSS-04301 - GAUSS-04400
- GAUSS-04401 - GAUSS-04500
- GAUSS-04501 - GAUSS-04600
- GAUSS-04601 - GAUSS-04700
- GAUSS-04701 - GAUSS-04800
- GAUSS-04801 - GAUSS-04900
- GAUSS-04901 - GAUSS-05000
- GAUSS-05001 - GAUSS-05100
- GAUSS-05101 - GAUSS-05200
- GAUSS-05201 - GAUSS-05300
- GAUSS-05301 - GAUSS-05400
- GAUSS-05401 - GAUSS-05500
- GAUSS-05501 - GAUSS-05600
- GAUSS-05601 - GAUSS-05700
- GAUSS-05701 - GAUSS-05800
- GAUSS-05801 - GAUSS-05900
- GAUSS-05901 - GAUSS-06000
- GAUSS-06001 - GAUSS-06100
- GAUSS-06101 - GAUSS-06200
- GAUSS-06201 - GAUSS-06300
- GAUSS-06301 - GAUSS-06400
- GAUSS-06401 - GAUSS-06500
- GAUSS-06501 - GAUSS-06600
- GAUSS-06601 - GAUSS-06700
- GAUSS-06701 - GAUSS-06800
- GAUSS-06801 - GAUSS-06900
- GAUSS-06901 - GAUSS-07000
- GAUSS-07001 - GAUSS-07100
- GAUSS-07101 - GAUSS-07200
- GAUSS-07201 - GAUSS-07300
- GAUSS-07301 - GAUSS-07400
- GAUSS-07401 - GAUSS-07480
- GAUSS-50000 - GAUSS-50999
- GAUSS-51000 - GAUSS-51999
- GAUSS-52000 - GAUSS-52999
- GAUSS-53000 - GAUSS-53699
- Error Log Reference
- System Catalogs and System Views
- Common Faults and Identification Guide
- Common Fault Locating Methods
- Common Fault Locating Cases
- Core Fault Locating
- Permission/Session/Data Type Fault Location
- Service/High Availability/Concurrency Fault Location
- Table/Partition Table Fault Location
- File System/Disk/Memory Fault Location
- After You Run the du Command to Query Data File Size In the XFS File System, the Query Result Is Greater than the Actual File Size
- File Is Damaged in the XFS File System
- Insufficient Memory
- "Error:No space left on device" Is Displayed
- When the TPC-C is running and a disk to be injected is full, the TPC-C stops responding
- Disk Space Usage Reaches the Threshold and the Database Becomes Read-only
- SQL Fault Location
- Index Fault Location
- Source Code Parsing
- FAQs
- Glossary
Development Specifications
If the connection pool mechanism is used during application development, comply with the following specifications:
- If GUC parameters are set on the connection, run SET SESSION AUTHORIZATION DEFAULT; RESET ALL; to clear the connection state before you return the connection to the connection pool.
- If a temporary table is used, delete the temporary table before you return the connection to the connection pool.
Otherwise, connections returned to the pool retain session state, which affects subsequent operations that reuse them.
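The cleanup described above can be issued as a short SQL script before the connection is handed back to the pool; a minimal sketch (tmp_stage is a hypothetical temporary table name, shown only to illustrate the second rule):

```sql
-- Clear session-level GUC settings and restore the session authorization.
SET SESSION AUTHORIZATION DEFAULT;
RESET ALL;
-- Drop any temporary tables created in this session (tmp_stage is hypothetical).
DROP TABLE IF EXISTS tmp_stage;
```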
Overview
Introduction
Although ISO has issued SQL-92, SQL:1999, SQL:2006, and other SQL standards, each database product implements the same functionality in its own way, so the corresponding grammar rules also differ. When formulating development specifications, it is therefore necessary to write separate specifications for each database.
This specification emphasizes practicability and operability. Based on the common problems and mistakes developers make during coding, it lays out detailed and clear rules and constraints for every aspect of code writing. It mainly includes the following content:
- Naming specification
- Design specification
- Syntax specification
- Optimization-related specification
- PG compatibility
In addition, specific examples are given for each detailed rule of the specification.
Application Scope
This specification applies to MogDB 1.1.0 and later versions.
Naming Specification
Unified Object Naming Specification
The unified standards for naming database objects (database, schema, table, column, view, index, constraint, sequence, function, trigger, and so on) are as follows:
- The length cannot exceed 63 characters.
- It is advised to use meaningful English words.
- It is advised to use a combination of lowercase letters, numbers, and underscores.
- It is not advised to use double quotation marks (") unless the name must contain special characters such as uppercase letters or spaces.
- It is not advised to start a name with PG or GS (to avoid confusion with system database objects), or with a number.
- It is forbidden to use reserved words. Refer to the official documentation for the list of reserved keywords.
- The number of columns that a table can contain varies from 250 to 1600 depending on the column types.
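Most of the rules above can be checked mechanically. The following Python sketch illustrates such a check; the rule set and the sample reserved-word list are illustrative only, not the full MogDB keyword list:

```python
import re

# Illustrative subset of reserved words; the full list is in the
# official MogDB documentation (assumption: not exhaustive).
RESERVED = {"select", "table", "index", "user", "group", "order"}

def check_object_name(name: str) -> list:
    """Return a list of naming-rule violations (empty list means the name is OK)."""
    problems = []
    if len(name) > 63:
        problems.append("longer than 63 characters")
    if not re.fullmatch(r"[a-z0-9_]+", name):
        problems.append("not lowercase letters/digits/underscores")
    elif re.match(r"^(pg|gs|[0-9])", name):
        problems.append("starts with pg/gs or a digit")
    if name in RESERVED:
        problems.append("reserved word")
    return problems
```

For example, check_object_name("trade_record") returns an empty list, while check_object_name("pg_my_table") flags the prefix rule.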
Temporary and Backup Object Naming
- It is recommended to add a date to the names of temporary or backup database objects (such as tables), for example, dba.trade_record_1970_01_01 (where dba is the DBA-specific schema, trade_record is the table name, and 1970_01_01 is the backup date).
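Following this convention, a dated backup table can be created directly from the source table; a sketch using the schema and table from the example above:

```sql
-- Create a dated backup of dba.trade_record.
CREATE TABLE dba.trade_record_1970_01_01 AS
SELECT * FROM dba.trade_record;
```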
Tablespace Naming
- The user tablespace of the database is represented by ts_<tablespace name>, where the tablespace name contains the following two categories:
- Data space: For the user's default tablespace, it is represented by default. For other tablespaces, it is represented according to the category of the tables hosted on the tablespace. For example, the table that stores code is represented by code. The table that stores customer information is represented by customer. Try to use one tablespace to host the tables of that category. If a table is particularly large, consider using a separate tablespace.
- Index space: add idx_ in front of the name of the corresponding data tablespace. For example, the index space for the user's default tablespace is represented by ts_idx_default. For index tablespace of code table, use ts_idx_code.
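Under these rules, the data and index tablespaces for the code category mentioned above might be created as follows (the relative paths are illustrative):

```sql
-- Data tablespace for code tables, and its matching index tablespace.
CREATE TABLESPACE ts_code RELATIVE LOCATION 'tablespace/ts_code';
CREATE TABLESPACE ts_idx_code RELATIVE LOCATION 'tablespace/ts_idx_code';
```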
Index Naming
- Index naming rule: table_column_idx, for example student_name_idx. This is the default name that MogDB generates when an index is created without an explicit name, so it is advised to create indexes without naming them explicitly and rely on the database defaults.
create unique index on departments(department_id);
CREATE INDEX
\di
+----------+-------------------------------+--------+---------+
| Schema | Name | Type | Owner |
|----------+-------------------------------+--------+---------|
| mogdb | departments_department_id_idx | index | mogdb |
+----------+-------------------------------+--------+---------+
SELECT 1
Variables Naming
- Use English words for names; avoid pinyin, and especially avoid pinyin abbreviations. Chinese characters and special characters are not allowed in names.
- For simple uses that involve no complicated operations, such as counters, simple numeric-style names are acceptable.
Partitioned Table Naming
- The name of a partitioned table follows the naming rules of ordinary tables.
- When a table is partitioned by time range (one partition per month), name the partitions PART_YYYYMM, for example PART_201901 and PART_201902.
Function Naming
- The name should be consistent with the function's actual behavior, and a verb prefix should be used to describe the action performed.
Example: The following names conform to the specification:
func_addgroups (adds multiple groups), func_addgroup (adds one group)
Design Specification
Database Design
- It is recommended to name the database after its business function, which is simple and intuitive.
- Databases should preferentially be created with the PG compatibility type.
- The recommended database encoding is UTF8.
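A database created along these lines might look like the following sketch (mydb is a placeholder name; DBCOMPATIBILITY selects the compatibility type, and the exact option syntax should be verified against your MogDB version):

```sql
CREATE DATABASE mydb
    WITH ENCODING = 'UTF8'
         DBCOMPATIBILITY = 'PG';
```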
Tablespace Design
- Frequently used tables and indexes should be stored in a separate tablespace created on disks with good performance.
- Tables and indexes that hold mostly historical data or are less active can be stored in tablespaces on lower-performance disks.
- Tables and indexes can be stored separately in different tablespaces.
- Tablespaces can also be divided by database, by schema, or by business function.
- Each database/schema corresponds to a tablespace and a corresponding index tablespace.
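For instance, a frequently updated table and its index could each be assigned a dedicated tablespace; a sketch (hot_orders, ts_hot, and ts_idx_hot are hypothetical names, and the tablespaces are assumed to exist already):

```sql
-- Place the busy table and its index in separate, fast tablespaces.
CREATE TABLE hot_orders (order_id integer NOT NULL) TABLESPACE ts_hot;
CREATE INDEX hot_orders_order_id_idx ON hot_orders (order_id) TABLESPACE ts_idx_hot;
```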
Schema Design
- When a user is created in a database, a schema with the same name is created in that database by default.
- It is not recommended to create database objects under the default public schema.
- Create a schema that is different from the username for the business to use.
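A sketch of this setup (appuser, appdata, and the password are placeholder values):

```sql
-- Creating the user implicitly creates a schema named appuser.
CREATE USER appuser PASSWORD 'Example@1234';
-- Create a separate schema for business objects instead of using public.
CREATE SCHEMA appdata AUTHORIZATION appuser;
```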
Table Design
- When designing the table structure, plan ahead so that fields do not need to be added frequently and field types or lengths do not need to be modified.
- Comment information must be added to the table, and the comment must match the table name.
- Creating business tables with the unlogged/temp/temporary keywords is prohibited.
- Fields used as join keys between tables must have strictly consistent data types, to avoid indexes failing to work.
- It is forbidden to store date values in VARCHAR or other character types; if such a field is used, no date operations can be performed on it, and it must be strictly defined in the data specification.
- For astore tables with frequent updates, it is recommended to specify fillfactor=85 when creating the table, to reserve space for HOT updates.
- Frequently updated tables should be placed separately in a tablespace with good storage performance.
- It is recommended to consider partitioning for tables whose data volume exceeds one billion rows or that occupy more than 10 GB on disk.
- The data types defined for the fields in the table structure must be consistent with those defined in the application, and the collation rules must be consistent between tables, to avoid errors or the inability to use indexes.
Note: For example, the user_id field of table A is defined as varchar, but the SQL statement uses WHERE user_id = 1234, comparing a character column with a number.
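The fillfactor and comment rules above can be applied at creation time; a minimal sketch with illustrative table and column names:

```sql
-- Reserve 15% free space per page for HOT updates on a frequently updated table.
CREATE TABLE account_balance
(
    account_id integer NOT NULL,
    balance    numeric(18,2)
)
WITH (fillfactor = 85);
COMMENT ON TABLE account_balance IS 'account balance (frequently updated)';
```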
Partitioned Table Design
- It is not recommended that a partitioned table contain more than 1,000 partitions.
- Partitions can be placed in different tablespaces according to frequency of use.
- The primary key or unique index must contain the partition key.
- For tables with a large data volume, partition according to the attributes of the table data to get better performance.
- To convert a normal table into a partitioned table, you need to create a new partitioned table and then import the data from the normal table into it. Therefore, when you initially design the table, plan in advance whether to use a partitioned table according to your business.
- For businesses that regularly delete historical data, it is recommended to partition the tables by time and, when deleting, to DROP or TRUNCATE the corresponding partition instead of using a DELETE operation.
- It is not recommended to use global indexes on partitioned tables, because partition maintenance operations may invalidate them, making maintenance difficult.
Use of Partitioned Table
The following shows how to operate on a range partitioned table.
- Create a tablespace
mogdb=# CREATE TABLESPACE example1 RELATIVE LOCATION 'tablespace1/tablespace_1';
mogdb=# CREATE TABLESPACE example2 RELATIVE LOCATION 'tablespace2/tablespace_2';
mogdb=# CREATE TABLESPACE example3 RELATIVE LOCATION 'tablespace3/tablespace_3';
mogdb=# CREATE TABLESPACE example4 RELATIVE LOCATION 'tablespace4/tablespace_4';
When the following information is displayed, it means the creation is successful.
CREATE TABLESPACE
- Create a partitioned table
mogdb=# CREATE TABLE mogdb_usr.customer_address
(
ca_address_sk integer NOT NULL ,
ca_address_id character(16) NOT NULL ,
ca_street_number character(10) ,
ca_street_name character varying(60) ,
ca_street_type character(15) ,
ca_suite_number character(10) ,
ca_city character varying(60) ,
ca_county character varying(30) ,
ca_state character(2) ,
ca_zip character(10) ,
ca_country character varying(20) ,
ca_gmt_offset numeric(5,2) ,
ca_location_type character(20)
)
TABLESPACE example1
PARTITION BY RANGE (ca_address_sk)
(
PARTITION P1 VALUES LESS THAN(5000),
PARTITION P2 VALUES LESS THAN(10000),
PARTITION P3 VALUES LESS THAN(15000),
PARTITION P4 VALUES LESS THAN(20000),
PARTITION P5 VALUES LESS THAN(25000),
PARTITION P6 VALUES LESS THAN(30000),
PARTITION P7 VALUES LESS THAN(40000),
PARTITION P8 VALUES LESS THAN(MAXVALUE) TABLESPACE example2
)
ENABLE ROW MOVEMENT;
When the following information is displayed, it means the creation is successful.
CREATE TABLE
It is recommended that the number of partitions in a column-store partitioned table does not exceed 1,000.
- Insert data
Insert the data from table mogdb_usr.customer_address into table mogdb_usr.customer_address_bak. For example, suppose you have created a backup table mogdb_usr.customer_address_bak of table mogdb_usr.customer_address in the database. To insert the data from mogdb_usr.customer_address into mogdb_usr.customer_address_bak, run the following commands.
mogdb=# CREATE TABLE mogdb_usr.customer_address_bak
(
ca_address_sk integer NOT NULL ,
ca_address_id character(16) NOT NULL ,
ca_street_number character(10) ,
ca_street_name character varying(60) ,
ca_street_type character(15) ,
ca_suite_number character(10) ,
ca_city character varying(60) ,
ca_county character varying(30) ,
ca_state character(2) ,
ca_zip character(10) ,
ca_country character varying(20) ,
ca_gmt_offset numeric(5,2) ,
ca_location_type character(20)
)
TABLESPACE example1
PARTITION BY RANGE (ca_address_sk)
(
PARTITION P1 VALUES LESS THAN(5000),
PARTITION P2 VALUES LESS THAN(10000),
PARTITION P3 VALUES LESS THAN(15000),
PARTITION P4 VALUES LESS THAN(20000),
PARTITION P5 VALUES LESS THAN(25000),
PARTITION P6 VALUES LESS THAN(30000),
PARTITION P7 VALUES LESS THAN(40000),
PARTITION P8 VALUES LESS THAN(MAXVALUE) TABLESPACE example2
)
ENABLE ROW MOVEMENT;
CREATE TABLE
mogdb=# INSERT INTO mogdb_usr.customer_address_bak SELECT * FROM mogdb_usr.customer_address;
INSERT 0 0
- Alter partitioned table row movement properties
mogdb=# ALTER TABLE mogdb_usr.customer_address_bak DISABLE ROW MOVEMENT;
ALTER TABLE
- Delete a partition
Delete partition P8.
mogdb=# ALTER TABLE mogdb_usr.customer_address_bak DROP PARTITION P8;
ALTER TABLE
- Add a partition
Add partition P8, covering the range 40000 <= ca_address_sk < MAXVALUE.
mogdb=# ALTER TABLE mogdb_usr.customer_address_bak ADD PARTITION P8 VALUES LESS THAN (MAXVALUE);
ALTER TABLE
- Rename a partition
Rename partition P8 as P_9.
mogdb=# ALTER TABLE mogdb_usr.customer_address_bak RENAME PARTITION P8 TO P_9;
ALTER TABLE
Rename partition P_9 as P8.
mogdb=# ALTER TABLE mogdb_usr.customer_address_bak RENAME PARTITION FOR (40000) TO P8;
ALTER TABLE
- Alter the tablespace of partition
Alter the tablespace of partition P6 to example3.
mogdb=# ALTER TABLE mogdb_usr.customer_address_bak MOVE PARTITION P6 TABLESPACE example3;
ALTER TABLE
Alter the tablespace of partition P4 to example4.
mogdb=# ALTER TABLE mogdb_usr.customer_address_bak MOVE PARTITION P4 TABLESPACE example4;
ALTER TABLE
- Query a partition
Query partition P6.
mogdb=# SELECT * FROM mogdb_usr.customer_address_bak PARTITION (P6);
mogdb=# SELECT * FROM mogdb_usr.customer_address_bak PARTITION FOR (35888);
- Delete a partitioned table and tablespace
mogdb=# DROP TABLE mogdb_usr.customer_address_bak;
DROP TABLE
mogdb=# DROP TABLESPACE example1;
mogdb=# DROP TABLESPACE example2;
mogdb=# DROP TABLESPACE example3;
mogdb=# DROP TABLESPACE example4;
DROP TABLESPACE
Column Design
- Avoid duplicating column names with system tables.
- Field meanings and data types should be consistent with the program code design.
- All fields must have comments added.
- Do not use character types where numeric types can be used.
- It is forbidden to store date data in character types.
- Use timestamptz for time-type fields.
- Require NOT NULL for fields wherever possible, and provide default values for fields.
- A newly created MogDB database is compatible with Oracle by default. In this mode, the NOT NULL constraint does not allow an empty string to be passed, because an empty string is converted to NULL by default. Databases created in PG compatibility mode do not have this problem.
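A minimal sketch of the Oracle-compatibility behavior described above (the table name is hypothetical):

```sql
-- In a MogDB database created in the default (Oracle-compatible) mode:
mogdb=# CREATE TABLE t_compat (name varchar(10) NOT NULL);
mogdb=# INSERT INTO t_compat VALUES ('');
-- The empty string is converted to NULL, so the NOT NULL constraint is
-- violated and the INSERT fails. In a PG-compatible database, '' is
-- stored as an empty string and the INSERT succeeds.
```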
Sequence Design
- Manual creation of table-related sequences is prohibited; they should be specified via the serial/bigserial types.
- It is recommended to set the step size of the sequence to 1.
- It is not recommended to set minvalue and maxvalue.
- It is not recommended to set cache; serial numbers are not consecutive after setting cache.
- It is prohibited to turn on cycle.
- The serial type should be consistent with the type and range of the corresponding variable definition in the code, to prevent failures to insert data.
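A sketch of the recommended approach: declare the column as serial/bigserial and let the database create and own the backing sequence implicitly (table and column names are illustrative):

```sql
-- The backing sequence is created automatically, with step 1 and no cycle
mogdb=# CREATE TABLE orders
(
    order_id bigserial PRIMARY KEY,
    item     varchar(60)
);
```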
Constraint Design
Primary Key Constraint
- Each table must include a primary key.
- It is not recommended that the primary key carry business meaning, such as an identification-certificate number or a country name, even though such values are unique.
- It is recommended that a primary key be written as id serial primary key or id bigserial primary key.
- For a large table, it is recommended that the primary key be written as follows, which is easier to maintain later:
  create table test(id serial not null);
  create unique index CONCURRENTLY ON test (id);
Unique Constraint
Where uniqueness is required beyond the primary key, add unique constraints. You can create a unique index with the uk_ prefix to implement a unique constraint.
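For instance, a unique constraint on an email column can be implemented as a unique index with the uk_ prefix (table and column names are illustrative):

```sql
mogdb=# CREATE UNIQUE INDEX CONCURRENTLY uk_employee_email ON employee (email);
```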
Foreign Key Constraint
- It is generally advisable to create foreign key constraints for tables with foreign-key relationships.
- For systems with high performance requirements, where data integrity is guaranteed by the application, foreign keys are not recommended.
- When using a foreign key, you must specify the action of the foreign key, such as cascade, set null, or set default.
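A sketch of a foreign key with an explicit action, assuming hypothetical dept and emp tables:

```sql
mogdb=# CREATE TABLE dept
(
    dept_id   integer PRIMARY KEY,
    dept_name varchar(30)
);
mogdb=# CREATE TABLE emp
(
    emp_id  serial PRIMARY KEY,
    dept_id integer REFERENCES dept (dept_id) ON DELETE SET NULL
);
-- Deleting a department sets emp.dept_id to NULL instead of failing
```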
Non-Empty Column
- All non-null columns must have the NOT NULL constraint added.
Check Constraint
- For fields with the check constraint, it is required to specify the check rules, such as the gender and status fields.
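For example, check rules for hypothetical gender and status fields might look like:

```sql
mogdb=# CREATE TABLE person
(
    person_id serial PRIMARY KEY,
    gender    char(1)  NOT NULL CHECK (gender IN ('M', 'F')),
    status    smallint NOT NULL CHECK (status IN (0, 1, 2))
);
```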
Index Design
- It is not recommended that a table with frequent DML operations have more than 5 indexes.
- Add the CONCURRENTLY parameter when creating or dropping indexes.
- Virtual indexes can be used to evaluate the effectiveness of indexes before actually creating them.
- Create indexes on fields that frequently appear after the keywords order by, group by, and distinct.
- Create indexes on fields that are often used as query selection conditions.
- Create indexes on attributes that are often used in table joins.
- It is not recommended that a composite index contain more than 3 fields.
- A composite index should contain a field that is a common search condition.
- The first field of a composite index should not also have a single-field index.
- For tables whose data is rarely updated and where only a few fields are frequently queried, consider using covering indexes.
- Do not create indexes on fields that have a large number of duplicate values.
- It is recommended to use unique indexes instead of unique constraints, for ease of later maintenance.
- For high-frequency queries with multiple fields and conditions in where, it is recommended to build composite indexes over those fields, with reference to the data distribution.
- Useless and duplicate indexes should be deleted to avoid negative impacts on the execution plan and database performance.
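As an illustration of the composite-index guidelines above, for queries that frequently filter on status and sort by created_at (table and column names are hypothetical):

```sql
-- status is the common search condition, so it leads the index;
-- the index stays within the recommended limit of 3 fields
mogdb=# CREATE INDEX CONCURRENTLY idx_orders_status_created
        ON orders (status, created_at);
```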
View Design
- Prefer simple views and use complex views sparingly.
  Simple view: data comes from a single table, and the view contains no groups of data or functions.
  Complex view: data comes from multiple tables, or the view contains groups of data or functions. A complex view can contain at most three tables.
- Avoid nested views. If they must be used, it is advised to have at most two levels of nesting.
Function Design
- A function should access database table records or other database objects, and may modify database information through statements such as INSERT, DELETE, UPDATE, DROP, or CREATE.
- If a piece of logic is unrelated to the database, it should not be implemented as a database function.
- It is not advised to use DML or DDL statements in a function.
Syntax Specification
About NULL
- Description: NULL judgment: IS NULL, IS NOT NULL.
- Description: Beware of boolean types taking the values true, false, and NULL.
- Description: Beware of NOT IN collections containing NULL elements.
- Recommendation: Use count(1) or count(*) to count rows, but not count(col), because NULL values will not be counted.
- Rule: When counting multiple columns, the column names must be enclosed in parentheses, e.g. count( (col1,col2,col3) ).
- Note: With a multi-column count, a row is counted even if all of its columns are NULL, so the effect is the same as count(*).
- Note: count(distinct col) counts the number of distinct non-NULL values of a column; count(distinct (col1,col2,...) ) counts the distinct values of multiple columns, NULL is counted, and two NULLs are considered the same.
- Note: Distinction between count and sum of NULL:
  select count(1), count(a), sum(a) from (SELECT * FROM (VALUES (NULL), (2) ) v(a)) as foo where a is NULL;
   count | count | sum
  -------+-------+-----
       1 |     0 |
  (1 row)
- Check whether two values are the same (NULL is considered as the same value).
  select null is distinct from null;
   ?column?
  ----------
   f
  (1 row)

  select null is distinct from 1;
   ?column?
  ----------
   t
  (1 row)

  select null is not distinct from null;
   ?column?
  ----------
   t
  (1 row)

  select null is not distinct from 1;
   ?column?
  ----------
   f
  (1 row)
About Invalid Indexes
- During SQL statement writing, functions and expressions are often used in query operations. It is not recommended to apply functions or expressions to the condition column, because doing so prevents indexes on that column from being used, affecting SQL query efficiency. It is recommended that functions or expressions be applied to the condition value instead. For example,
  select name from tab where id+100>1000;
  can be changed to the following:
  select name from tab where id>1000-100;
- Do not use a left fuzzy query. For example,
  select id from tab where name like '%ly';
- Do not use negative queries, such as NOT IN or NOT LIKE. For example,
  select id from tab where name not in ('ly','ty');
Ensuring That All Variables and Parameters Are Used
- Declaring variables incurs some system overhead and makes code look loose. Variables that are declared but never used produce compilation alarms; make sure that no alarms are reported.
Query Operations
DDL Operation
- Any DDL operation on existing tables is prohibited during peak business periods.
- All production DDL operations must first be verified in the development and test environments.
- CONCURRENTLY should be used when maintaining indexes.
- pg_repack should be used instead of vacuum full to rebuild a table.
- When adding a field with a default value to a large table, the operation should be split into three parts: adding the field, filling the default values, and adding the not-null constraint. For example, break
  alter table t add column col datatype not null default xxx;
  into the following, to avoid a long table lock caused by filling the default values:
  alter table t add column col datatype;
  alter table t alter column col set default xxx;
  update t set col = DEFAULT where id in ( select id from t where col is null limit 1000 ); \watch 3
  alter table t alter column col set not null;
DML Operation
- The SQL statement for updating data must not contain where 1=1.
- The amount of data operated on by a single DML statement should not exceed 100,000 rows.
- When clearing all data in a table, truncate should be used.
- For risky operations, open a transaction and confirm the result before committing.
- The SQL logic in a transaction should be as simple as possible, and the transaction should be committed promptly after execution, to avoid the idle in transaction status.
- Use copy instead of insert when importing a large amount of data.
- Consider deleting indexes before importing data, and rebuild them after importing.
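A sketch of bulk loading with copy instead of insert, assuming CSV options as in PostgreSQL's COPY (the file path and table name are illustrative):

```sql
-- One server-side bulk load is far faster than many single-row INSERTs
mogdb=# COPY mogdb_usr.customer_address FROM '/tmp/customer_address.csv' CSV HEADER;
```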
DQL Operation
- Prohibit the use of select *; specify the fields actually required instead.
- Prohibit the use of where 1=1, to avoid a full table scan or Cartesian product.
- The search condition value should be consistent with the field type, to prevent the index from being skipped.
- Fields to the left of the equal sign should be consistent with the index, especially for conditional or functional indexes.
- Pay attention to the execution plans of slow SQL; if a plan is not consistent with expectations, change the SQL as soon as possible.
- Use count(*) or count(1) to count rows; count(column) does not count null rows.
- Limit the number of joins; no more than 3 are recommended.
- Recursive queries need to be bounded to prevent infinite loops.
- For or operations, use union all or union instead.
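For example, an or condition over two different indexed fields can be rewritten so that each branch can use its own index (table and column names are illustrative):

```sql
-- Instead of: SELECT id FROM tab WHERE name = 'ly' OR zip = '10001';
mogdb=# SELECT id FROM tab WHERE name = 'ly'
        UNION
        SELECT id FROM tab WHERE zip = '10001';
```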
Data Import
- When a large amount of data needs to be stored in a table, it is recommended that COPY is used rather than INSERT. This can improve the data write speed.
- Before data is imported, delete related indexes. After the import is complete, recreate indexes. This can improve the data import speed.
Transaction Operation
- Make sure that the SQL logic in a transaction is simple, the granularity of each transaction is small, less resources are locked, lock and deadlock can be avoided, and transaction can be committed in a timely manner after being executed.
- For DDL operations, especially multiple DDL operations including CREATE, DROP, and ALTER, do not explicitly start a transaction, because the lock level involved is very high and deadlocks occur easily.
- If the state of the primary node is idle in transaction, related resources will be locked, which can lead to lock waits or even deadlock. If the state of the standby node is idle in transaction, synchronization between the primary and standby nodes will be suspended.
Others
- For instances running on SSDs, it is recommended that random_page_cost (default value: 4) be set to a value ranging from 1.0 to 2.0. This makes the query planner more willing to use index scans.
- In the scenario where EXPLAIN ANALYZE needs to be used to view the actual execution plan and time, if a write query is to be performed, it is strongly recommended that a transaction is started first and then rollback is performed.
- For tables frequently updated and with the data size largely increased, table reorganization should be performed in appropriate time to lower the high water mark.
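The EXPLAIN ANALYZE recommendation above can be sketched as follows (table and values are illustrative):

```sql
mogdb=# BEGIN;
mogdb=# EXPLAIN ANALYZE UPDATE tab SET name = 'ty' WHERE id = 1;
-- The actual plan and timing are shown, but the write is discarded
mogdb=# ROLLBACK;
```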
PostgreSQL Compatibility
Database Creation Specifications
During MogDB database creation, use the following PG compatibility mode:
create database dbname DBCOMPATIBILITY='PG' encoding='utf8';
Data Type
Value Type
During development and usage, MogDB supports only the smallint, integer, bigint, numeric[(p[,s])], serial, and bigserial value types.
Type | PostgreSQL | MogDB | Storage Length | Remarks |
---|---|---|---|---|
tinyint | / | Supported | 1 byte | 0 to 255 |
smallint | Supported | Supported | 2 bytes | -32,768 to +32,767 |
integer | Supported | Supported | 4 bytes | -2,147,483,648 to +2,147,483,647 |
binary_integer | / | Supported | / | integer alias |
bigint | Supported | Supported | 8 bytes | -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 |
decimal[(p[,s])] | Supported | Supported | Variable byte | A maximum of 131072 before the decimal point and 16383 after the decimal point |
numeric[(p[,s])] | Supported | Supported | Variable byte | A maximum of 131072 before the decimal point and 16383 after the decimal point |
number[(p[,s])] | / | Supported | / | Numeric alias |
real | Supported | Supported | 4 bytes | Accurate to six decimal digits |
float4 | / | Supported | 4 bytes | Accurate to six decimal digits |
double precision | Supported | Supported | 8 bytes | Accurate to fifteen decimal digits |
binary_double | / | Supported | 8 bytes | Double precision alias |
float8 | / | Supported | 8 bytes | Accurate to fifteen decimal digits |
float[(p )] | / | Supported | 4 or 8 bytes | |
dec[(p,[s])] | / | Supported | / | A maximum of 131072 before the decimal point and 16383 after the decimal point |
integer[(p,[s])] | / | Supported | / | A maximum of 131072 before the decimal point and 16383 after the decimal point |
smallserial | Supported | Supported | 2 bytes | 1 to 32,767 |
serial | Supported | Supported | 4 bytes | 1 to 2,147,483,647 |
bigserial | Supported | Supported | 8 bytes | 1 to 9,223,372,036,854,775,807 |
Character Type
During the development, MogDB supports only the char(n), varchar(n), and text character types.
Type | PostgreSQL | MogDB | Storage Length | Remarks |
---|---|---|---|---|
char(n) | Supported | Supported | A maximum of 1 GB in PostgreSQL; a maximum of 10 MB in MogDB | In PostgreSQL, n indicates the number of characters. In MogDB, n indicates the number of bytes. In PG compatibility mode, n indicates the number of characters. |
nchar(n) | / | Supported | A maximum of 10 MB | n indicates the number of bytes. In PG compatibility mode, n indicates the number of characters. |
varchar(n) | Supported | Supported | A maximum of 1 GB in PostgreSQL; a maximum of 10 MB in MogDB | In PostgreSQL, n indicates the number of characters. In MogDB, n indicates the number of bytes. In PG compatibility mode, n indicates the number of characters. |
varchar2(n) | / | Supported | A maximum of 10 MB | varchar(n) alias |
nvarchar2(n) | / | Supported | A maximum of 10 MB | n indicates the number of characters. |
text | Supported | Supported | 1 GB - 1 | |
clob | / | Supported | 1 GB - 1 | text alias |
Time Type
During the development, MogDB supports only the timestamp[(p )][with time zone] and date time types.
Type | PostgreSQL | MogDB | Storage Length | Remarks |
---|---|---|---|---|
timestamp[(p )][without time zone] | Supported | Supported | 8 bytes | 4713 BC to 294276 AD |
timestamp[(p )][with time zone] | Supported | Supported | 8 bytes | 4713 BC to 294276 AD |
date | Supported | Supported | 4 bytes | 4713 BC to 5874897 AD (The actual storage size is 8 bytes in MogDB) |
time[(p )][without time zone] | Supported | Supported | 8 bytes | 00:00:00 to 24:00:00 |
time[(p )][with time zone] | Supported | Supported | 12 bytes | 00:00:00+1459 to 24:00:00-1459 |
interval[fields][(p )] | Supported | Supported | 16 bytes | -178000000 to 178000000 years |
smalldatetime | / | Supported | 8 bytes | Date and time without time zone, accurate to the minute; 30 seconds rounds to one minute |
interval day(1) to second(p ) | / | Supported | 16 bytes | |
reltime | / | Supported | 4 bytes | |
JSON Type
MogDB supports only the JSON type.
Type | PostgreSQL | MogDB | Storage Length | Remarks |
---|---|---|---|---|
json | Supported | Supported | / | |
jsonb | Supported | Supported since version 2.1 | / |
Keywords
In the following table, Reserved indicates that keywords in a database are reserved and cannot be customized. Non-reserved or N/A indicates that keywords can be customized.
Keyword | MogDB | PostgreSQL |
---|---|---|
AUTHID | Reserved | N/A |
BUCKETS | Reserved | N/A |
COMPACT | Reserved | N/A |
DATE | Non-reserved (function or type is not supported) | |
DELTAMERGE | Reserved | N/A |
EXCLUDED | Reserved | N/A |
FENCED | Reserved | N/A |
GROUPING | Non-reserved (function or type is not supported) | |
HDFSDIRECTORY | Reserved | N/A |
IS | Reserved | Reserved (function or type is supported) |
ISNULL | Non-reserved | Reserved (function or type is supported) |
LATERAL | Reserved | |
LESS | Reserved | N/A |
MAXVALUE | Reserved | Non-reserved |
MINUS | Reserved | N/A |
MODIFY | Reserved | N/A |
NLSSORT | Reserved | N/A |
NUMBER | Non-reserved (function or type is not supported) | |
PERFORMANCE | Reserved | N/A |
PROCEDURE | Reserved | Non-reserved |
REJECT | Reserved | N/A |
ROWNUM | Reserved | N/A |
SYSDATE | Reserved | N/A |
VERIFY | Reserved | N/A |
Implicit Conversion Comparison Table
Input Type | Target Type | MogDB | PG |
---|---|---|---|
bool | int2, int4, int8 | Supported | int4 is not supported, others are the same |
int2 | bool, text, bpchar, varchar,interval | Supported (except bpchar) | NA |
int4 | bool, int2, text, bpchar, varchar, interval | Supported (except bpchar) | bool is not supported, int2 is in assignment, others are the same |
int8 | bool, text, bpchar, varchar | Supported (except bpchar) | NA |
text | int8, int4, int2, float4, float8, date, timestamp, numeric | Supported | NA |
float4 | int8, int4, int2, text, bpchar, varchar | Supported (except bpchar) | First three are in assignment, others are the same |
float8 | int8, int4, int2, text, float4, bpchar, varchar, interval, numeric | Supported (except bpchar) | int8, int4, int2, float4, numeric are in assignment, others are the same |
bpchar | int8, int4, date, timestamp, numeric | ||
date | text, bpchar, varchar | Supported (except bpchar) | NA |
timestamp | text, varchar | Supported | NA |
timestamptz | text | Supported | NA |
numeric | int8, int4, int2, text, bpchar, varchar, interval | Supported (except bpchar) | First three are in assignment, others are the same |