
Documentation: v3.0

Supported Versions:

MogDB 3.0.1

1. Version Description

MogDB 3.0.1 is a patch version of MogDB 3.0.0, released on July 30, 2022. It mainly contains bug fixes based on MogDB 3.0.0.


2. Modified Defects

2.1 Kernel

  • The database is abnormal in PBE scenarios during dynamic partition pruning.
  • The database is abnormal due to gstrace_check_dump_usess.
  • The result is incorrect during type conversion of a partition expression.
  • The database is abnormal when async_submit is set to on in a CM environment.
  • The statement is incorrect when gs_dump is used to export a table created using a type.
  • An error occurs when on update is processed by gs_dump.
  • A table file cannot be opened when a global temporary table is merged in a new session.
  • opengauss_exporter and MogDB are incompatible.
  • The DDL of a table created based on a type cannot be obtained by the pg_get_tabledef function.
  • An error occurs when the pg_get_tabledef function is used to obtain the tablespace of a partitioned table in a database not owned by the table's owner.
  • An error occurs when the exception_init function in a package is used.
  • Certain data fails to be inserted when log_min_messages is set to debug5.
  • pg_repack fails to execute when a bloom index is created on a secondary partition table.
  • The MySQL-compatible on update timestamp memcheck test fails.
  • The gs_async_submit_sessions_status view does not exist in dbe_perf.
  • The on update timestamp feature becomes invalid after the database connection is closed and reopened.
  • The database is abnormal when the memory usage view is queried.
  • pg_get_tabledef does not generate the on update current_timestamp syntax.
  • The libcjson dependency file is missing from the lib directory of the tools package.

2.2 CM

  • AZ-related switchover operations (-a/-A/-z [az_name]) are now supported in DCF mode.
  • CM_CTL occasionally fails to set the primary mode of CMS to AUTO.
  • The cm_agent forced-kill mechanism causes a MogDB process in the coredump state to fail to generate a complete core file.

2.3 Extensions

  • [whale] The dbms_output package supports serveroutput.
  • [whale] The parameters of dbms_utility.db_version are inout.
  • [whale] Extension creation fails when behavior_compat_options is set to proc_outparam_override.
  • [whale] The output values of get_line and get_lines are incorrect.
  • [orafce] Packages and views in orafce that conflict with whale are deleted. The package definition is added to the function definition to support function calls in PL/pgSQL.
  • [orafce] utl_file.fopen triggers a database core dump when the file name is a null pointer.
  • [orafce] The default attribute is lost during pg_repack restructuring when a table contains a column with the on update timestamp attribute.
  • [wal2json] The output is incorrect when an event is deleted from a table that contains only a unique index.

3. Known Issues

  • In whale, the to_timestamp function cannot convert the date type to the timestamp type.

    Temporary solution: Create a to_timestamp overload whose input parameter is timestamp without time zone and whose return value and type are the same as those of the input parameter.

    CREATE OR REPLACE FUNCTION pg_catalog.to_timestamp(timestamp without time zone)
    RETURNS timestamp without time zone
    IMMUTABLE
    AS $$
    SELECT $1;
    $$ LANGUAGE sql;
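
    With the overload in place, a call that passes a timestamp value should resolve to the new function and return the value unchanged (a minimal sketch, assuming the function was created as above; run it in any session connected to the database):

    -- Resolves to the new pg_catalog.to_timestamp(timestamp without time zone)
    -- and simply echoes the input back, so the date-to-timestamp path works.
    SELECT pg_catalog.to_timestamp(CAST('2022-07-30' AS date));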
  • The output of a for reverse .. loop in a stored procedure is inconsistent with that in Oracle.

    Temporary solution:

    1. If MTK is used to migrate the original application code, the conversion can be performed during migration.
    2. For new code, manually swap the positions of the start and end values.
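
    As a sketch of item 2 (assuming MogDB's PL/pgSQL behavior matches PostgreSQL's, where REVERSE expects the bounds in descending order, while Oracle PL/SQL expects them in ascending order):

    -- Oracle PL/SQL (iterates 10 down to 1):
    --   FOR i IN REVERSE 1..10 LOOP ... END LOOP;
    -- MogDB equivalent: swap the start and end values.
    DO $$
    BEGIN
      FOR i IN REVERSE 10..1 LOOP
        RAISE NOTICE '%', i;  -- 10, 9, ..., 1
      END LOOP;
    END $$;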
  • When wal_level is set to hot_standby and the recovery-target-time specified for gs_probackup is later than the time of the last backup set, PITR fails.

  • When query_dop is set to a value other than the default 1 and millions of rows are queried, "ERROR: bogus varno: 65001" is reported.

  • Tuples in a compressed table are not stored in the specified order: after the cluster command is run, the tuple order is not the expected one (no data is lost; it is only stored in a different order).

Copyright © 2011-2024 www.enmotech.com All rights reserved.