
mog_filedump User Guide

Introduction

mog_filedump is a data file parsing tool ported to MogDB from the pg_filedump tool, with improved compatibility. It converts MogDB heap/index/control files into human-readable content. The tool can parse selected fields of the data columns as needed, and can also dump the data content directly in binary format. It automatically determines the file type from the data in the file's blocks; however, the -c option must be used to format the pg_control file.


Principle

The implementation consists of three main steps:

  1. Read a data block from the data file.

  2. Parse the block's data with the callback function for the corresponding data type.

  3. Call the output function for the corresponding data type to print the data content.


Enmo's Improvements

  1. Ported to MogDB for compatibility.

  2. Fixed an upstream pg_filedump bug in parsing the char data type.

  3. Fixed an upstream pg_filedump bug where, when parsing a data file with multiple fields, the name data type caused a data length mismatch.


Installation

Visit the download page of the MogDB official website, download the toolkit for the corresponding version, and place the tool in the bin directory of the MogDB installation path. As shown below, toolkits-xxxxxx.tar.gz is the toolkit that contains mog_filedump.

[Image: MogDB download page listing toolkits-xxxxxx.tar.gz]
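A minimal sketch of the remaining steps, assuming the archive unpacks mog_filedump at its top level and that MogDB is installed under /opt/mogdb (both assumptions; adjust the paths to your environment):

-- Unpack the toolkit and copy mog_filedump into the bin directory of the MogDB installation:
tar -xzf toolkits-xxxxxx.tar.gz
cp mog_filedump /opt/mogdb/bin/
chmod +x /opt/mogdb/bin/mog_filedump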


How to Use

mog_filedump [-abcdfhikxy] [-R startblock [endblock]] [-D attrlist] [-S blocksize] [-s segsize] [-n segnumber] file
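For example, a minimal sketch of a basic invocation (the data file path is the one used in the Examples section below and is only illustrative):

-- Parse a heap data file and print its block contents:
./mog_filedump -f db_p/base/15098/32904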

Valid options for heap and index files are as follows:

| Option | Function |
| ------ | -------- |
| -a | Display absolute addresses when formatting |
| -b | Output binary block images within a range |
| -d | Output the content of data blocks |
| -D | Decode tuples using a comma-separated list of the table's column data types. Currently supported types are: bigint, bigserial, bool, charN, date, float, float4, float8, int, json, macaddr, name, oid, real, serial, smallint, smallserial, text, time, timestamp, timetz, uuid, varchar, varcharN, xid, xml, and '~'. The special type '~' means ignore all following columns; for example, if a tuple has 10 columns, passing only the first three column types followed by '~' parses just the first three columns (see the examples after this table). |
| -f | Output and parse the content of data blocks |
| -h | Display usage and help information |
| -i | Output and parse item details (including XMIN, XMAX, Block Id, linp Index, Attributes, Size, and infomask) |
| -k | Verify the checksums of data blocks |
| -R | Parse and output the contents of the specified block range: -R startblock [endblock]. If only startblock is given without endblock, only that single block is output (see the examples after this table). |
| -s | Set the segment size |
| -n | Set the segment number |
| -S | Set the data block size |
| -x | Parse and output block items in index item format (the item format is detected automatically by default) |
| -y | Parse and output block items in heap item format (the item format is detected automatically by default) |
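
A few hedged example invocations of the options above (the data file path and the column type list follow the Examples section below and are only illustrative):

-- Parse only the first three columns and ignore the rest with '~':
./mog_filedump -D serial,smallserial,bigserial,~ db_p/base/15098/32904

-- Parse and output only blocks 0 through 3:
./mog_filedump -f -R 0 3 db_p/base/15098/32904

-- Verify block checksums while parsing:
./mog_filedump -k -f db_p/base/15098/32904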

The options available for the control file are as follows:

| Option | Function |
| ------ | -------- |
| -c | Interpret the given file as a control (pg_control) file |
| -f | Output and parse the file content |
| -S | Set the block size used when parsing the control file |
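
For example, a hedged invocation that parses the control file (pg_control is normally located in the global directory of the data directory; the path below assumes the db_p data directory used in the Examples section):

-- Parse the control file:
./mog_filedump -c -f db_p/global/pg_control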

The -i and -f options can be combined to produce more detailed output that helps operation and maintenance personnel with analysis, as shown below.
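For example (the data file path is the one used in the Examples section below):

-- Output item details together with parsed block content:
./mog_filedump -i -f db_p/base/15098/32904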


Examples

The test table below covers most of the data types supported by mog_filedump.

The following use case demonstrates the data parsing function. Add other options according to your actual needs.

-- Create table test:
create table test(serial serial, smallserial smallserial, bigserial bigserial, bigint bigint, bool bool, char char(3), date date, float float, float4 float4, float8 float8, int int, json json, macaddr macaddr, name name, oid oid, real real, smallint smallint, text text, time time, timestamp timestamp, timetz timetz, uuid uuid, varchar varchar(20), xid xid, xml xml);

-- Insert data:
insert into test(bigint, bool, char, date, float, float4, float8, int, json, macaddr, name, oid, real, smallint, text, time, timestamp, timetz, uuid, varchar, xid, xml) values(123456789, true, 'abc', '2021-4-02 16:45:00', 3.1415926, 3.1415926, 3.14159269828412, 123456789, '{"a":1, "b":2, "c":3}'::json, '04-6C-59-99-AF-07', 'lvhui', 828243, 3.1415926, 12345, 'text', '2021-04-02 16:48:23', '2021-04-02 16:48:23', '2021-04-02 16:48:23', 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11', 'adsfghkjlzc', '9973::xid', '<title>Book0001</title>');

-- Query the location of the data file of table test. The data directory specified by gs_initdb here is db_p, so the data file of table test is db_p/base/15098/32904:
mogdb=# select pg_relation_filepath('test');
 pg_relation_filepath
-----------------------
 base/15098/32904
(1 row)

-- Use the mog_filedump tool to parse the data file content:
./mog_filedump -D serial,smallserial,bigserial,bigint,bool,charN,date,float,float4,float8,int,json,macaddr,name,oid,real,smallint,text,time,timestamp,timetz,uuid,varchar,xid,xml db_p/base/15098/32904

[Image: parsed output of the data file]
