Documentation: v3.0

pg_bulkload

pg_bulkload Overview

pg_bulkload is a high-speed data loading tool for MogDB. It is faster than the COPY command because it can bypass the shared buffer and WAL buffer and write data directly to the data files.


Install pg_bulkload

For details, please refer to gs_install_plugin or gs_install_plugin_local.


Use pg_bulkload

Check that the tool is available, connect to the database with gsql, then create the extension and a test table:

pg_bulkload --help
gsql -p 5432 postgres -r
CREATE EXTENSION pg_bulkload;
CREATE TABLE test_bulkload(id int, name varchar(128));

Create a text file containing 100,000 lines of data:

seq 100000| awk '{print $0"|bulkload"}' > bulkload_output.txt
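To confirm the file was generated as expected, you can regenerate it and spot-check the first rows and the line count (same command as above):

```shell
# Regenerate the sample data, then verify its shape
seq 100000 | awk '{print $0"|bulkload"}' > bulkload_output.txt

head -n 2 bulkload_output.txt    # prints: 1|bulkload and 2|bulkload
wc -l < bulkload_output.txt      # prints: 100000
```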

Using Parameters

After the file is successfully created, run the following command:

pg_bulkload -i ./bulkload_output.txt -O test_bulkload -l test_bulkload.log -p 5432 -o "TYPE=csv" -o "DELIMITER=|" -d postgres -U hlv

Connect to the database to check whether the data is imported successfully:

select count(1) from test_bulkload;

Using the Control File

Before importing with the control file, clear the data loaded into the table in the previous step.

Write a .ctl file:

INPUT=/vdb/MogDB-server/dest/bulkload_output.txt
LOGFILE = /vdb/MogDB-server/dest/test_bulkload.log
LIMIT = INFINITE
PARSE_ERRORS = 0
CHECK_CONSTRAINTS = NO
TYPE = CSV
SKIP = 5                  # number of lines to skip at the beginning of the input
DELIMITER = |
QUOTE = "\""
ESCAPE = "\""
OUTPUT = test_bulkload
MULTI_PROCESS = NO
WRITER = DIRECT
DUPLICATE_ERRORS = 0
ON_DUPLICATE_KEEP = NEW
TRUNCATE = YES

Note: pg_bulkload parses the parameters in the .ctl file line by line, so the file must end with a line break; otherwise the last parameter may be identified incorrectly.
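A simple shell guard can append the missing trailing newline automatically. This is a sketch using a throwaway file name (demo.ctl) rather than the real control file:

```shell
# If the last byte of the file is not a newline, append one
f=demo.ctl
printf 'WRITER = DIRECT' > "$f"                   # sample content with no trailing newline
[ -n "$(tail -c 1 "$f")" ] && printf '\n' >> "$f"

tail -c 1 "$f" | od -An -c                        # last byte is now \n
```

Because command substitution strips trailing newlines, `tail -c 1` yields an empty string only when the file already ends with a newline, so the append runs just once.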

Run the following command:

pg_bulkload ./lottu.ctl -d postgres -U hlv
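Since the control file sets SKIP = 5 and TRUNCATE = YES, the table should end up with five fewer rows than the input file contains. A quick sanity calculation of the count to expect from select count(1):

```shell
# Expected rows = input lines minus the skipped lines
input_lines=100000
skip=5
echo $((input_lines - skip))   # prints: 99995
```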