Export

Introduction

Export and Import are used primarily for the following tasks:

  • data archival
  • upgrading to new releases of Oracle
  • backing up Oracle databases
  • moving data between Oracle databases

Export and Import allow you to accomplish the following tasks:

  • store Oracle data in operating system files independent of any database.
  • store definitions of database objects (such as tables, clusters, and indexes) with or without the data.
  • store inactive or temporary data.
  • back up only tables whose data has changed since the last export, using either an incremental or a cumulative export.
  • restore tables that were accidentally dropped, provided they were exported recently.
  • restore a database by importing from incremental or cumulative exports.
  • selectively back up parts of your database in a way that requires less storage space than a system backup.
  • move data from an older to a newer version of Oracle.
  • move data between Oracle databases.
  • move data between different hardware and operating system environments.
  • move data from one owner to another.
  • move data from one tablespace or schema to another.
  • save space or reduce fragmentation in your database.

The basic concept behind Export is to extract the object definitions and table data from an Oracle database and store them in Oracle-binary format. Object definitions include the DDL (Data Definition Language) needed to create database objects such as tablespaces, tables, indexes, sequences, users, and so on. The resulting export file can be stored on disk or any other auxiliary storage device. Since the export file is binary, it cannot be read by anything other than Oracle's Import utility. Export writes export files using the character set specified for the user session; for example, 7-bit ASCII or IBM Code Page 500 (EBCDIC). You can list the keywords used by Export by typing
exp HELP=y
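
As a minimal sketch, a single table can be exported straight from the command line; the username, password, table name and file names below are only placeholders:
exp scott/tiger TABLES=emp FILE=emp.dmp LOG=emp.log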

Required Privileges

Before the Export utility can be used, CATEXP.SQL or CATALOG.SQL (which in turn calls CATEXP.SQL) must be run after the database has been created. CATEXP.SQL has to be run only once, when the database is created. If CATEXP.SQL was not called by CATALOG.SQL (because CATALOG.SQL could not find CATEXP.SQL where it expected it), you can run CATEXP.SQL as user SYS. Be careful not to run CATALOG.SQL again, as doing so might damage your data dictionary.
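
If CATEXP.SQL does need to be run by hand, a typical approach (assuming the script sits in its usual rdbms/admin location under your ORACLE_HOME; the SYS password shown is a placeholder) is to connect as SYS from SQL*Plus and run it:
sqlplus "sys/your_sys_password as sysdba"
SQL> @?/rdbms/admin/catexp.sql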

To use Export you must have the CREATE SESSION privilege on the Oracle database. With that privilege you can export objects belonging to your own schema. To export objects owned by another user, you must have the EXP_FULL_DATABASE role, which is granted to the DBA role.

Export parameters

I usually prefer to use a parameter file to drive Export: all the directives are written in a flat file, and the Export utility reads them from there. Using a .par extension for your parameter file is preferable, but parameter files are nothing more than text files, so use any text editor (vi, edit, etc.) to create one. Let us look at a sample parameter file, say exp.par (you can also name the file junk.par or junk.jk; there are no restrictions):

  • FILE=exp020999.dmp
  • LOG=exp020999.log
  • FULL=Y
  • COMPRESS=Y

After writing the parameter file, invoke the Export utility:
exp username/password parfile=exp.par
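
The same directives can also be given directly on the command line instead of through a parameter file, for example (username, password and file names are placeholders):
exp system/manager FULL=Y COMPRESS=Y FILE=exp020999.dmp LOG=exp020999.log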

The parameters that can appear in the parameter file are as follows (a combined example follows the list):

  • BUFFER - The BUFFER parameter determines the maximum number of rows in an array fetched by Export:
    buffer_size = rows_in_array * maximum_row_size
    For example, with BUFFER=100000 and a maximum row size of 1000 bytes, roughly 100 rows are fetched per array. If a table containing a LONG datatype is exported, or if BUFFER is specified as zero, only one row at a time is fetched.
  • COMPRESS - Specifies how Export will manage the initial extent for the table data. This parameter is helpful during database re-organization: export the objects (especially tables and indexes) with COMPRESS=Y. If your table spans 20 extents of 1M each (which is not desirable from a performance point of view) and you export it with COMPRESS=Y, the generated DDL will have an INITIAL extent of 20M, and the extents will be coalesced into one when you import. It is sometimes desirable to export with COMPRESS=N, for instance when you do not have that much contiguous space in your tablespace and do not want the import to fail.
  • CONSISTENT - If massive updates or rollbacks are taking place when you kick off the export, the export will not be consistent under normal conditions. If you set CONSISTENT=Y, a rollback segment is designated to back out the effect of any uncommitted transactions; for example, a value changed after the export starts and rolled back again before it finishes. The negative effect is that the export runs slower, because it has to maintain consistency by cross-referencing the rollback segments.
  • CONSTRAINTS - A flag to indicate whether to export table constraints.
  • DIRECT - Specifying DIRECT=Y causes export to do Direct Path Exports, bypassing the evaluation layer.
  • FEEDBACK - Specifies that Export should display a progress meter in the form of a dot for every x rows exported. For example, FEEDBACK=10 displays a dot each time 10 rows have been exported. This is done for each table exported.
  • FILE - The name of the export file.
  • FULL - Specifies whether Export should export the entire database. This includes all users, their schemas, and the data dictionary.
  • GRANTS - Specifies whether grants should be exported or not.
  • INCTYPE - Specifies the type of incremental export. Options are COMPLETE, CUMULATIVE, and INCREMENTAL.
  • INDEXES - Specifies whether indexes should be exported or not.
  • LOG - Specifies a file name to receive information and error messages.
  • OWNER - Specifies a list of usernames whose objects will be exported.
  • RECORD - A flag to indicate whether to record an incremental or cumulative export in the database.
  • RECORDLENGTH - Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.
  • ROWS - Specifies whether the rows of table data should be exported or not.
  • STATISTICS - Specifies the type of database optimizer statistics to generate when the exported data is imported later. Options are ESTIMATE, COMPUTE, and NONE.
  • TABLES - Specifies the list of tables to export.
  • USERID - Specifies the username/password of the user initiating the export.
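
As a combined illustration, a parameter file that exports a single schema using several of the directives above might look like this (the schema name and file names are hypothetical):

  • OWNER=scott
  • FILE=expscott.dmp
  • LOG=expscott.log
  • GRANTS=Y
  • INDEXES=Y
  • ROWS=Y
  • FEEDBACK=1000

It would then be invoked with something like exp system/manager parfile=expscott.par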

Tips

Some Tips While Exporting

  • If transactions continue to access sequence numbers during an export, sequence numbers can be skipped. Sequence numbers can be skipped only when cached sequence numbers are in use (for example, SEQUENCE_CACHED_ENTRIES=10 set in your init.ora file). Assume a sequence vtx_seq accessed by your application, with SEQUENCE_CACHED_ENTRIES set to 10 in your init.ora file, so that sequence numbers 1-10 are cached in memory. If you export at this point, the exported next sequence number for vtx_seq is 11, and the cached but unused numbers 1-10 are lost.
  • If a table is named emp#, and your parameter file contains the line
    TABLES=(emp#, dept, invoice)
    then everything following emp#, i.e. dept, invoice, is treated as a comment. To get around this, enclose the table name containing # in quotation marks:
    TABLES=("emp#", dept, invoice)
  • The maximum size of your parfile is limited to 8K on versions below 8i, so listing the tables one per line will go over 8K fairly quickly; 8i removes this restriction. To stay within the limit, list all the table names on a single line (see the combined example after this list).
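
Putting these tips together, a parameter file that quotes the table containing # and keeps the whole table list on one line might look like this (table and file names are hypothetical):

  • FILE=tables.dmp
  • LOG=tables.log
  • TABLES=("emp#", dept, invoice)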

Export methods

Exports can be done using one of two methods:

  • Conventional - A conventional path export uses the SQL statement "SELECT * FROM TABLE" to extract data from database tables. Data is read from disk into a buffer (private buffer or buffer cache) and rows are transferred into the evaluation buffer. After passing expression evaluation (an equivalent INSERT statement is generated and validated), the data is transferred to the Export client, which then writes it to the export file. Some interesting points about conventional exports are:
    • Conventional path export exports only in the user session character set, i.e. whatever character set is in effect for the session (for the exp client this is normally determined by the NLS_LANG setting).
    • From Oracle version 7.1 onwards, the format of the export file was changed to include stored procedures, functions, and packages that have comments embedded in them. As a result, if you want to import a dump file created by 7.1 or later into an Oracle database below version 7.1, a patch has to be applied. All other objects (everything except stored procedures, functions, and packages) can be imported without a patch.
  • Direct - A direct path export reads the data directly, bypassing the evaluation buffer. It also optimizes the use of the SELECT * FROM TABLE statement. It is used in conjunction with the database in direct read mode, which uses private buffers instead of public buffers, so fewer resources are consumed and performance is improved. Interesting points about direct exports are listed below, followed by a sample invocation:
    • Direct path Export is available from Oracle release 7.3 onwards.
    • The Export parameter BUFFER, used to specify the size of the fetch array, applies only to conventional exports and has no effect on direct path exports.
    • The Export parameter RECORDLENGTH can be used to specify the size of the Export I/O buffer.
    • If you have done a direct path export of a 7.3 release database and want to build a 7.2 release database from it, that is not possible: direct path Export is not backward compatible. Use a conventional export instead.
    • Direct path export exports only in the database character set, i.e. any character set settings made for the session have no effect and must match the database character set, otherwise the export terminates with a warning. You can check the database character set by logging in as "SYS AS SYSDBA" and querying NLS_DATABASE_PARAMETERS for NLS_CHARACTERSET.
    • Direct path Exports cannot be invoked using interactive mode.
    • To reduce contention with other users for database resources during a direct path export, you can use direct read mode. Remember that Oracle7 users need to enable direct read mode by entering the following line in the init.ora file: COMPATIBLE=<db_version>, where db_version is 7.1.5 or higher.
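
As a sketch, a full database direct path export could be invoked as follows (the username, password and file names are placeholders, and RECORDLENGTH is optional):
exp system/manager FULL=Y DIRECT=Y RECORDLENGTH=65535 FILE=expdirect.dmp LOG=expdirect.log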

Export categories

Exports fall into three categories (sample invocations of all three appear after the list):

  • Incremental Exports - An incremental export backs up only the tables that have changed since the last incremental, cumulative, or complete export. For example, if one row has been added to or updated in a table since the last incremental, cumulative, or complete export, the entire table is exported; tables that have not been modified are not exported.
    Incremental exports cannot be specified as read consistent, i.e. CONSISTENT=Y cannot be used.
    The activity is tracked in SYS.INCEXP; each export updates that table with a new ITIME and EXPID.
    Incremental exports can only be done in full database mode (FULL=Y), and only users having the EXP_FULL_DATABASE role can run them.
    A look at the parameter file incexp.par:
    • FILE=incremental020999.dmp
    • LOG=incremental020999.log
    • FULL=Y
    • INCTYPE=INCREMENTAL
  • Cumulative Exports - A cumulative export backs up only the tables that have changed since the last cumulative or complete export. In essence, a cumulative export compresses a number of incremental exports into a single cumulative file. For example, assume a cumulative export was done at time "A". At time "B" a record was deleted from table emp_mast, and at time "C" an incremental export was done (it picks up emp_mast, since a record was deleted). At time "D" a record was inserted into emp_dtl, and at time "E" an incremental export was done (it picks up emp_dtl, since a record was inserted). When a cumulative export is done at time "F", it picks up both emp_mast and emp_dtl, since those two tables have changed since the last cumulative export. A look at the parameter file cumexp.par:
    • FILE=cumulative020999.dmp
    • LOG=cumulative020999.log
    • FULL=Y
    • INCTYPE=CUMULATIVE
  • Complete Exports - A complete export establishes a base for incremental and cumulative exports. It is equivalent to a full database export, except that it also updates the tables that track incremental and cumulative exports. If you do not specify INCTYPE=COMPLETE, those tracking tables are not updated. A look at the parameter file completeexp.par:
    • FILE=complete020999.dmp
    • LOG=complete020999.log
    • FULL=Y
    • INCTYPE=COMPLETE
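
To illustrate how the three categories work together, a hypothetical backup scheme could run a complete export weekly, a cumulative export daily and incremental exports in between, each using its own parameter file:
exp system/manager parfile=completeexp.par
exp system/manager parfile=cumexp.par
exp system/manager parfile=incexp.par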

Export modes

There are three modes in which you can export. All users have at least two modes available by default; a user with the EXP_FULL_DATABASE role has all three. The modes are (example invocations follow the list):

  • Exporting Tables - You can export a table, or a group of tables, in a schema. A parameter file (exptables.par) might look like this:
    • FILE=exptables020999.dmp
    • LOG=exptables020999.log
    • TABLES=(table1, table2, table3)
  • Exporting Users - If you wish, you can export all objects in a user's schema (such as tables, data, grants, and indexes). By default an ordinary user can only export his own schema; a user with the EXP_FULL_DATABASE role can export other users' schemas. A typical parameter file (expschema.par) to export your own schema looks like this:
    • FILE=expschema.dmp
    • LOG=expschema.log
  • Exporting the Full Database - You must have the EXP_FULL_DATABASE role to carry out this operation. A typical parameter file (expfulldb.par) to export the full database looks like this:
    • FULL=Y
    • FILE=expfulldb.dmp
    • LOG=expfulldb.log
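
For illustration, each of the three modes above is invoked in the same way, only the parameter file differs (usernames and passwords are placeholders; the full database export must be run by a user with the EXP_FULL_DATABASE role):
exp scott/tiger parfile=exptables.par
exp scott/tiger parfile=expschema.par
exp system/manager parfile=expfulldb.par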