You can use the Loader’s IMPORT TABLE command to control the import of application data and/or the corresponding database catalog information for a user’s objects.


The objects imported with an IMPORT TABLE command belong to the user who is logged on to the database instance at the time of the import.

You have either exported the required data with the corresponding EXPORT command or the data is available in the specified data format.


<import_table_command>             ::= IMPORT TABLE <table_name>
                                      [<if_condition>] [<duplicates_clause>]
                                      <catalog_data_instream_table_spec> [<package_spec>]

<catalog_data_instream_table_spec> ::= <catalog_instream_spec>
                                     | <data_instream_table_spec>
                                     | <catalog_instream_spec> <data_instream_table_spec>

IMPORT TABLE <table_name> <catalog_instream_spec> <data_instream_table_spec>

    Imports a table completely (database catalog information and application data)

IMPORT TABLE <table_name> <catalog_instream_spec>

    Imports only the database catalog information of a table

IMPORT TABLE <table_name> <data_instream_table_spec>

    Imports all the application data of a table

<table_name>

    Table name

<if_condition>

    Selecting data records

<duplicates_clause>

    Behavior when a row with the same key already exists in the table

<catalog_instream_spec>      ::= CATALOG <instream_ddl_spec>

<data_instream_table_spec>   ::= <data_instream_part_spec>
                               | <data_instream_columns_spec>

<data_instream_part_spec>    ::= DATA <instream_tableformat_spec>

<data_instream_columns_spec> ::= DATA <instream_columnformat_spec>
                                 [<import_column_spec> ...
                                  <import_lobcolumn_spec> ...] [<usage_spec>]

<package_spec>               ::= PACKAGE <outstream_csv_spec>

<instream_ddl_spec>, <instream_tableformat_spec>, <instream_columnformat_spec>, <outstream_csv_spec>

    Data stream

<import_column_spec>

    Column description

<import_lobcolumn_spec>

    Exporting and importing LONG values

<usage_spec>

    Performance improvement in the IMPORT TABLE statement

See also:

Loader Tutorial, IMPORT TABLE
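Putting the elements above together, a complete import of a table's database catalog information and application data might look like the following sketch. The hotel.city demo table and the file names are placeholders; the exact options of the data streams are described under Data Stream.

```
IMPORT TABLE hotel.city
CATALOG INSTREAM FILE 'city.catalog'
DATA INSTREAM FILE 'city.data' RECORDS
```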



You can select individual data records from a DATA data stream in the formats CSV, FWV or RECORDS and then import only these records to the specified table.

See also:

Selecting Data Records


You use the DUPLICATES clause to specify how to proceed when importing data from a data stream if a row with the same key as the new row already exists in the table. This syntax rule is not evaluated if the data is imported into the database in the PAGES format.


REJECT DUPLICATES

    The new row is rejected with an error message.

IGNORE DUPLICATES

    The new row is not inserted.

UPDATE DUPLICATES

    The new row overwrites the existing row.
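For example, to overwrite existing rows with the same key during a data-only import, the DUPLICATES clause could be combined with a data stream as in the following sketch (the hotel.customer demo table and the file name are placeholders):

```
IMPORT TABLE hotel.customer
UPDATE DUPLICATES
DATA INSTREAM FILE 'customer.data' RECORDS
```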

<catalog_instream_spec>, <data_instream_table_spec>

Depending on whether database catalog and/or application data is imported, the following data streams are required for the import:

-  Data stream with the database catalog information: CATALOG <instream_ddl_spec>

-  Data stream with the application data: <data_instream_table_spec>

During the import, the system table SYSLOADER.TRANSFORMATIONMODEL and additional system tables are filled.

<data_instream_part_spec> ::= DATA <instream_tableformat_spec>

To import the data in the PAGES or RECORDS formats, use syntax variant DATA <instream_tableformat_spec>. We recommend this option if you do not need the data in a readable format and you want to exchange it between MaxDB database instances. All the data is in the specified data stream, including the LONG data.
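A data-only import in the PAGES format could be sketched as follows (the hotel.reservation demo table and the file name are placeholders; the available stream options are described under Data Stream):

```
IMPORT TABLE hotel.reservation
DATA INSTREAM FILE 'reservation.data' PAGES
```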

<data_instream_columns_spec> ::= DATA <instream_columnformat_spec>
[<import_column_spec>... <import_lobcolumn_spec>...] [<usage_spec>]

To import the data in the CSV or FWV [BINARY] formats, use syntax variant DATA <instream_columnformat_spec>. We recommend this option if you need the data in a readable form, for example for tools from other vendors. Exchanging the data between MaxDB database instances is still possible, but takes much longer than with the RECORDS or PAGES formats.
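An import from a readable CSV data stream could look like the following sketch (the hotel.room demo table and the file name are placeholders):

```
IMPORT TABLE hotel.room
DATA INSTREAM FILE 'room.csv' CSV
```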

If you use the CSV and FWV [BINARY] formats, extra data streams are created for the LONG data during export.

If you want to import LONG values, you need to specify a separate data stream for these column values. You can do this with the <import_lobcolumn_spec> syntax element.

See also:

Data Stream

Column Specification

Importing and Exporting LONG Values


You can use the <usage_spec> syntax rule to specify whether data that is available in the CSV or FWV [BINARY] format is to be imported via a mass INSERT (RECORDS USAGE) or via pages (PAGE WITH ...).

When importing sorted application data to a table, you can control the performance via usage_spec. You can use the option of importing by pages (PAGE WITH…) if the target table’s table definition satisfies certain conditions. If you select PAGE WITH ..., the data is entered directly into the pages of the database and no log entries are written. This enables an additional performance improvement compared to the RECORDS USAGE option. However, the table is write-protected until you back up the new application data.
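Assuming a CSV import of sorted data into a target table that satisfies the conditions for importing by pages, the <usage_spec> could be attached as in the following sketch. The table name, file name, and the 80 % filling level are assumed values; the exact form of the PAGE WITH ... option is described under Performance Improvement in the IMPORT TABLE.

```
IMPORT TABLE hotel.reservation
DATA INSTREAM FILE 'reservation.csv' CSV
PAGE WITH 80 % USAGE
```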

See also:

Performance Improvement in the IMPORT TABLE


You can export system table SYSLOADER.TRANSFORMATIONMODEL.

-  If you specify a data stream by means of PACKAGE <outstream_csv_spec>, all entries of the SYSLOADER.TRANSFORMATIONMODEL table are exported to this data stream. We recommend this option to obtain a complete log of the import process.

-  If you do not specify a data stream by means of PACKAGE <outstream_csv_spec>, the system table SYSLOADER.TRANSFORMATIONMODEL is not exported.
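To log the import process, the PACKAGE specification could be appended to the command as in the following sketch (the hotel.city demo table, the file names, and the form of the <outstream_csv_spec> are assumptions; see Additional Log Functions):

```
IMPORT TABLE hotel.city
CATALOG INSTREAM FILE 'city.catalog'
DATA INSTREAM FILE 'city.data' RECORDS
PACKAGE OUTSTREAM FILE 'transformationmodel.csv'
```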

See also:

Additional Log Functions (package_spec)


The application data and/or the database catalog information are imported from the <data_instream_table_spec> and <catalog_instream_spec> data streams.

If you have specified <package_spec>, all entries in the table SYSLOADER.TRANSFORMATIONMODEL are exported to the <package_spec> data stream.


If the import cannot be completed successfully, the Loader logs the cause of the error in the log file.

The values PACKAGEGUID and CLIENTNODE are also written to the log file. The PACKAGEGUID is required if you want to restart the import.

See also:

Importing: Commands


Log File

System Tables