EXP / IMP
To improve the performance of an import, see the site below:
http://www.dba-oracle.com/oracle_tips_load_speed.htm
Query to monitor the import process:
select
    substr(sql_text, instr(sql_text, 'into "'), 30) table_name,
    rows_processed,
    round((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60, 1) minutes,
    trunc(rows_processed / ((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) rows_per_minute
from
    sys.v_$sqlarea
where
    lower(sql_text) like 'insert %into "%'
    and command_type = 2
    and open_versions > 0;
EXP Parameters
BUFFER
Default: OS dependent
This setting sizes the array fetch buffer used by EXP. If you divide the BUFFER parameter by the maximum row size for a given table, this tells you how many rows EXP will fetch at a time from that table. Larger array sizes improve performance. I have found 100 rows to be an all-around 'good' array size.
Note that some tables, in particular tables with LONG or LOB columns, are fetched one row at a time regardless of the buffer setting. You should ensure the buffer is set large enough for your largest LONG column.
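For example (a back-of-the-envelope sketch; the table and file names are illustrative, and the 1,000-byte maximum row size is assumed): with BUFFER=100000 and rows of at most 1,000 bytes, EXP would fetch about 100000 / 1000 = 100 rows per array fetch:
exp userid=tkyte/tkyte tables=emp buffer=100000 file=emp.dmp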
COMPRESS
Default: Y
This parameter does not compress the contents of the exported data. Rather, it controls how the STORAGE clause for exported objects is generated. If left at Y, the storage clause for each object will have an initial extent equal to the sum of its current extents; that is, EXP will generate a CREATE statement that attempts to fit the object into one single extent. I recommend COMPRESS = N and the use of locally managed tablespaces.
ROWS
Default: Y
Tells EXP whether to export the rows of data within exported tables or just the structure. I use this frequently with a setting of N just to do structural exports.
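As a sketch combining this with the COMPRESS recommendation above (the file name is made up), the following exports SCOTT's schema structure only, with the storage clauses left as they are:
exp userid=tkyte/tkyte owner=scott rows=n compress=n file=scott_ddl.dmp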
FILESIZE
Default: 0
If set to a positive value, sets the maximum size of each DMP file that export will create. Used when exporting more than two gigabytes of data. See Large Exports later for details.
QUERY
Default: N/A
Allows you to associate a WHERE clause with the tables being exported. This WHERE clause will be applied to the rows during a table-level export, and only rows that satisfy it will be exported. This lets you export a 'slice' of a table.
See the section Subsetting Data for an example.
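Pending that section, a minimal sketch (the predicate and file name are invented; the backslashes escape the quotes for a UNIX shell, and quoting rules differ on Windows, so a parameter file, described under PARFILE below, is often easier):
exp userid=tkyte/tkyte tables=emp query=\"where deptno=10\" file=emp_slice.dmp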
FULL
Default: N
If set to Y, export performs a full database export. This will extract all users, tablespace definitions, system grants, and everything else from the database.
OWNER
Default: N/A
This lets you specify a list of schemas to export. Useful for cloning a schema or 'renaming' a user.
TABLES
Default: N/A
Allows you to specify a list of tables to be exported.
PARFILE
Default: N/A
Specifies the name of a parameter file containing parameter_name = value pairs. Can be used in place of specifying the parameters on the command line. Most useful with long lists of tables to export or when specifying a query.
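As a sketch (the file name and contents are hypothetical), a parameter file exp.par might contain:
tables=(emp,dept)
query="where deptno = 10"
file=slice.dmp
and would be invoked as:
exp userid=tkyte/tkyte parfile=exp.par
This also sidesteps the shell-quoting issues noted under QUERY above.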
CONSISTENT
Default: N
Specifies whether the export should be done in a read-only transaction. This ensures cross-table consistency. If you recall from Chapter 3 on Locking and Concurrency, each individual query is executed in a read-consistent fashion. A read-only transaction (or an isolation level of SERIALIZABLE) extends that read consistency to the transaction level. If you are exporting tables that are linked via declarative RI (Referential Integrity), or nested table objects, and need to be able to import them together later, using CONSISTENT = Y would be advised. This is especially true if the tables are likely to be modified as the export takes place.
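For example, to export SCOTT's schema as of a single point in time (the file name is illustrative):
exp userid=tkyte/tkyte owner=scott consistent=y file=scott.dmp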
TRANSPORT_TABLESPACE
Default: N
Specifies whether EXP will be used to export the metadata for a transportable tablespace set. See the section on Transporting Data for details.
TABLESPACES
Default: N/A
Used with TRANSPORT_TABLESPACE to list the tablespaces being transported.
IMP Parameters
SHOW
Default: N
If set to Y, import will SHOW you what it would have done; it will not actually do it. With SHOW = Y, no data will be added and no objects created.
IGNORE
Default: N
When set to Y, IMP will ignore most object creation errors. Useful when you have pre-created the objects in the database and wish to use IMP just to populate the tables with data.
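For example, assuming the tables have been pre-created in the target database and expdat.dmp is our export file, a sketch of such a data-only style reload would be:
imp userid=tkyte/tkyte file=expdat.dmp full=y ignore=y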
INDEXFILE
Default: N/A
If specified, IMP will write all CREATE INDEX commands, and many other DDL statements, to the specified file (with REMs – comments – in front of them). No other objects are processed in the DMP file; only the index file is created.
FROMUSER
Default: N/A
If set, this specifies a list of users to import from the DMP file. You can use this to restore a single schema from a full database export.
TOUSER
Default: N/A
If set, this will import the objects owned by the user specified in the FROMUSER parameter into the user specified by the TOUSER parameter. This allows you to 'clone' a user.
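A minimal cloning sketch, assuming expdat.dmp contains SCOTT's objects and that a target user scott_copy (a hypothetical name) has already been created:
imp userid=tkyte/tkyte file=expdat.dmp fromuser=scott touser=scott_copy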
COMMIT
Default: N
Specifies whether IMP will commit after each array insert; the array size is controlled by the BUFFER parameter. Normally, IMP will COMMIT after it loads a table fully. Since a conventional insert generates the least amount of rollback possible, since COMMITting frequently will slow down the insertion of data as well as cause the generation of more redo, and since an import that commits along the way cannot simply be restarted after a failure, I recommend a setting of N for this parameter.
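If you do need periodic commits anyway (for example, when rollback segment space is very tight), remember that the array size per commit is governed by BUFFER; a sketch, with an invented file name:
imp userid=tkyte/tkyte file=big.dmp full=y commit=y buffer=10000000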
TTS_OWNERS
Default: N/A
Used with TRANSPORT_TABLESPACE, this lists the owners of the objects in the transportable tablespace set.
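A sketch, borrowing the file paths and owner from the Transporting Data example later in this section:
imp userid="""sys/manager as sysdba""" transport_tablespace=y tts_owners=(tts_user)
"datafiles=(c:\temp\tts_ex1.dbf,c:\temp\tts_ex2.dbf)"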
Large Exports
When EXP is used to write to a device that supports 'seeking', as a normal file does, it is limited in the size of the file it can generate. EXP uses the normal OS file APIs, which, on a 32-bit OS, limit the size of the file to 2 GB. I know of four solutions to this issue (although there are probably others), and we'll take a look at each of them.
Use the FILESIZE Parameter
This option first became available with Oracle 8i. Using the FILESIZE parameter we can set the maximum size (in bytes) of the DMP files that constitute our export, and EXP will create as many DMP files as necessary in order to export the data. For example, in order to export to a series of files that should be no larger than 500 MB apiece, we can use:
exp userid=tkyte/tkyte file = f1,f2,f3,f4,f5 filesize = 500m owner = scott
This would create DMP files f1.dmp, f2.dmp, and so on, each of which would be up to 500 MB in size. If the total size of the export were less than 2 GB, EXP would not need to create the f5.dmp file.
Export Smaller Pieces
This solves the problem by avoiding it altogether. If you have a 10 GB database with 50 application schemas, and each schema is under 2 GB in size, you can use user-level exports. You'll end up with 50 export files, each of which contains one application schema.
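A sketch of driving such schema-by-schema exports from csh, in the spirit of the pipe script below (the schema names are hypothetical; userid=/ assumes an OPS$, OS-authenticated account, as in that script):
#!/bin/csh -f
# One DMP file per application schema; each schema is assumed to stay under 2 GB.
foreach u ( app1 app2 app3 )
    exp userid=/ owner=$u file=exp_$u.dmp
end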
Export to an OS Pipe
This is a solution that works very well on UNIX. To date, I have not found a way to do this in the Windows environment. In this case I use the command mknod to create a named pipe. A named pipe is a device by which one process can write into the pipe and another process can read out from the other end. EXP can write an unlimited amount of data to pipes, since they do not support 'seeking'. Further, the process that reads off the other end of the pipe could be a compression routine. Using this, we can simultaneously export the data and compress it. In the event that the compressed file is still larger than 2 GB in size, we can use the split utility to break the file up into smaller pieces. Below is the commented script I use to do this on UNIX. The script also shows how to import this compressed, split data, since immediately after doing an export I run a full import with SHOW = Y to test the integrity of the DMP file I've just created:
#!/bin/csh -f
# Set this to the userid you want to perform the export as. I always use OPS$ (OS
# authenticated) accounts for all jobs that will be run in the background. In that
# way a password never appears in a script file or in the ps output.
setenv UID /
# This is the name of the export file. SPLIT will use this to name the pieces of
# the compressed DMP file.
setenv FN exp.`date +%j_%Y`.dmp
# This is the name of the named pipe we will use.
setenv PIPE /tmp/exp_tmp_ora8i.dmp
# Here I limit the size of the compressed files to 500 MB each. Anything less
# than 2 GB would be fine.
setenv MAXSIZE 500m
# This is what we are going to export. By default I am doing a full database
# export.
setenv EXPORT_WHAT "full=y COMPRESS=n"
# This is where the export will go to.
cd /nfs/atc-netapp1/expbkup_ora8i
# Clear out the last export.
rm expbkup.log export.test exp.*.dmp* $PIPE
# Create the named pipe.
mknod $PIPE p
# Write the datetime to the log file.
date > expbkup.log
# Start a gzip process in the background. Gzip will read the pipe and put the
# compressed data out to split. Split will then create 500 MB files out of the
# input data adding .aa, .ab, .ac, .ad, ... file extensions to the template name
# found in $FN.
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &
# Now, start up export. The Gzip above is waiting for export to start filling the
# pipe up.
exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log
# Now the export is done, this is how to IMP. We need to sort the filenames and
# then simply cat their contents into gunzip. We write that into the pipe. IMP
# will then read that pipe and write what it would do to stderr. The >>& in the
# csh redirects both stdout and stderr for us.
date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &
imp userid=$UID file=$PIPE show=y full=y >>& export.test
date >> export.test
# Clean up the pipe, we don't need it anymore.
rm -f $PIPE
If you are using UNIX, I personally feel that the above is a better approach than using FILESIZE= with multiple file names specified on the command line, for two reasons. First, it allows us to compress the data before storing it; second, it will not prompt us for a file name as EXP otherwise might.
Export to a Device that does not Support Seeking
This again is a UNIX-only solution. You can export directly to a tape device just by specifying the name of the device. For example:
exp userid=tkyte/tkyte file=/dev/rmt/0 volsize = 6000m full = y
This will export directly to tape, stopping every 6000 MB to let us change the tapes if needed.
Transporting Data
A transportable tablespace is a mechanism for taking the formatted datafiles of one database, and attaching them to another. Instead of unloading the data from one database to a flat file, or a DMP file, and then inserting that data into the other database, transporting a tablespace lets you move the data as fast as you can copy files.
There are some restrictions on transporting tablespaces, namely:
a) The source and target databases must be running on the same hardware platform – You cannot take the datafiles from Windows NT and transfer them to HP/UX, for example. A DMP file can be copied from OS to OS – the datafiles of a database cannot be; the datafiles are not OS-independent like a DMP file is.
b) The source and target databases must be using the same character set – You cannot take a set of files from a database with a WE8ISO8859P1 character set and attach them to a UTF8 instance, for example.
c) The target database must not have a tablespace by the same name – The tablespace name from the originating database will be used. If the target database already has a tablespace by that name, Oracle cannot attach another with that name.
d) The source and target database must have the same block sizes – You cannot attach the files from a 4 KB blocksize database to an 8 KB blocksize database.
e) You must transport a self-contained set of objects – For example, you cannot transport a tablespace that contains an index without also transporting the tablespace that contains the table on which the index is based.
f) There are some objects that cannot be transported – These include snapshots/materialized views, function-based indexes, domain indexes (such as those produced by interMedia), scoped REFs, and advanced queues with more than one recipient.
g) The source database must set the tablespace to be transported to READ ONLY mode for a short period of time – This is the period of time it takes to export the tablespace metadata and copy the datafiles elsewhere.
h) SYS-owned objects cannot be transported – If a tablespace contains any object owned by SYS, the transport will fail. This means that objects like rollback segments, the SYSTEM tablespace, and so on cannot be transported (which is reasonable, as there would be no reason to transport those objects in any case).
The following example works through all of the steps involved in transporting a tablespace. Just to make it interesting, I'm using two tablespaces. I'll begin by setting up the tablespaces, tables, and creating a new user for the example:
SQL> create tablespace tts_ex1
datafile 'c:\oracle\oradata\tkyte816\tts_ex1.dbf' size 1m
extent management local uniform size 64k;
Tablespace created.
SQL> create tablespace tts_ex2
datafile 'c:\oracle\oradata\tkyte816\tts_ex2.dbf' size 1m
extent management local uniform size 64k;
Tablespace created.
SQL> create user tts_user identified by tts_user
default tablespace tts_ex1
temporary tablespace temp;
User created.
SQL> grant dba to tts_user;
Grant succeeded.
SQL> connect tts_user/tts_user
Connected.
SQL> create table emp as select * from scott.emp;
Table created.
SQL> create table dept as select * from scott.dept;
Table created.
SQL> create index emp_idx on emp(empno) tablespace tts_ex2;
Index created.
SQL> create index dept_idx on dept(deptno) tablespace tts_ex2;
Index created.
SQL> select object_type, object_name,
decode(status,'INVALID','*','') status,
tablespace_name
from user_objects a, user_segments b
where a.object_name = b.segment_name (+)
order by object_type, object_name
/
OBJECT_TYPE OBJECT_NAME S TABLESPACE_NAME
------------ ------------------------------ - ------------------------------
INDEX DEPT_IDX TTS_EX2
EMP_IDX TTS_EX2
TABLE DEPT TTS_EX1
EMP TTS_EX1
Prior to attempting the export, we need to ensure we have a self-contained set of objects for transport. We can transport a table without its indexes, but we cannot transport an index without its table. The following shows the routine we should use to check whether a tablespace, or set of tablespaces, is self-contained:
SQL> exec sys.dbms_tts.transport_set_check( 'tts_ex1', TRUE );
PL/SQL procedure successfully completed.
SQL> select * from sys.transport_set_violations;
no rows selected
SQL> exec sys.dbms_tts.transport_set_check( 'tts_ex2', TRUE );
PL/SQL procedure successfully completed.
SQL> select * from sys.transport_set_violations;
VIOLATIONS
-------------------------------------------------------------------------
Index TTS_USER.EMP_IDX in tablespace TTS_EX2 points to table TTS_USER.EMP in
tablespace TTS_EX1
Index TTS_USER.DEPT_IDX in tablespace TTS_EX2 points to table TTS_USER.DEPT in
tablespace TTS_EX1
SQL> exec sys.dbms_tts.transport_set_check( 'tts_ex1, tts_ex2', TRUE );
PL/SQL procedure successfully completed.
SQL> select * from sys.transport_set_violations;
no rows selected
This shows that we can transport TTS_EX1, because it only contains table data and is self-contained. Any attempt to transport TTS_EX2, however, would fail since it contains indexes, but not the tables on which they are based. Lastly, both tablespaces, TTS_EX1 and TTS_EX2, may be transported together, as we would be transporting both the tables and the indexes. SYS.DBMS_TTS is executable by any DBA (they have EXECUTE ANY PROCEDURE) or any user with the EXECUTE_CATALOG_ROLE. It populates a dynamic table with any errors that would occur if we attempted to transport the tablespace(s). Now we are ready to 'detach', or transport, these tablespaces. We begin by making them READ ONLY:
SQL> alter tablespace tts_ex1 read only;
Tablespace altered.
SQL> alter tablespace tts_ex2 read only;
Tablespace altered.
We then issue the EXP command:
SQL> host exp userid="""sys/change_on_install as sysdba"""
transport_tablespace=y tablespaces=(tts_ex1,tts_ex2)
Export: Release 8.1.6.0.0 - Production on Mon Mar 19 19:26:26 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Production
With the Partitioning option
JServer Release 8.1.6.0.0 - Production
Export done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TTS_EX1 ...
. exporting cluster definitions
. exporting table definitions
. . exporting table EMP
. . exporting table DEPT
For tablespace TTS_EX2 ...
. exporting cluster definitions
. exporting table definitions
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.
Note the need for the triple, double quotes in order to specify the userid on the command line. On UNIX, we would have to escape the / as well. If we want to avoid that, we can just let EXP prompt us for the username. Also, note that we used AS SYSDBA. Only a SYSDBA (internal) account may perform a transport in Oracle 8.1.6 and later versions. In Oracle 8.1.5 the DBA role was sufficient. (Note that this command has to be entered into SQL*Plus on one line; it is shown on two lines in the example above.)
Now all we need to do is copy the datafiles to another location. This could have been done in parallel with the export above to reduce the amount of time spent in read-only mode:
SQL> host XCOPY c:\oracle\oradata\tkyte816\tts_ex?.dbf c:\temp
C:\oracle\oradata\tkyte816\TTS_EX1.DBF
C:\oracle\oradata\tkyte816\TTS_EX2.DBF
2 File(s) copied
SQL> alter tablespace tts_ex1 read write;
Tablespace altered.
SQL> alter tablespace tts_ex2 read write;
Tablespace altered.
The tablespaces are available for READs and WRITEs once again. Now, we can take this set of files to another database and attach them:
C:\exp> imp file=expdat.dmp userid="""sys/manager as sysdba"""
transport_tablespace=y
"datafiles=(c:\temp\tts_ex1.dbf,c:\temp\tts_ex2.dbf)"
Import: Release 8.1.6.0.0 - Production on Mon Mar 19 19:26:39 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning option
JServer Release 8.1.6.0.0 - Production
Export file created by EXPORT:V08.01.06 via conventional path
About to import transportable tablespace(s) metadata...
Once the import completes, the data is immediately visible, but the tablespaces are attached READ ONLY; any attempt to modify them fails:
SQL> update emp set ename=lower(ename);
update emp set ename=lower(ename)
*
ERROR at line 1:
ORA-00372: file 9 cannot be modified at this time
ORA-01110: data file 9: 'C:\TEMP\TTS_EX1.DBF'
SQL> alter tablespace tts_ex1 read write;
Tablespace altered.
SQL> alter tablespace tts_ex2 read write;
Tablespace altered.
SQL> update emp set ename=lower(ename);
14 rows updated.
And that's it; the files are attached to the database. The final step shows that they are attached in READ ONLY mode (this makes sense, as they were read-only when we transported them), so we might need to alter them after attachment. If you would like to test this out on a single database, you could execute these commands, or their equivalent, on your database after you make the tablespaces READ WRITE again, but before you perform the IMP:
SQL> drop tablespace tts_ex1 including contents;
Tablespace dropped.
SQL> drop tablespace tts_ex2 including contents;
Tablespace dropped.
SQL> host erase c:\oracle\oradata\tkyte816\tts_ex?.dbf
Getting the DDL
This is a nice side effect of EXP. We can use it to generate much of the DDL from our database. We've seen how this works in Chapter 6, Tables, where I used EXP and IMP to see a more verbose CREATE TABLE statement.
There are two ways to get the DDL: SHOW = Y and INDEXFILE = filename. I always recommend using the INDEXFILE option and never the SHOW = Y option. The latter is designed to show us what IMP would do if it were actually executed. The format of the output it produces is unusable as it is – it tends to wrap the DDL statements at awkward places and adds double quotes. Additionally, there is no clear delineation between the individual commands themselves. SHOW = Y is OK as a last-ditch effort to extract some DDL, if that is all you have.
I can now run EXP and IMP as follows:
C:\>exp userid=tkyte/tkyte owner=tkyte
C:\>imp userid=tkyte/tkyte full=y indexfile=tkyte.sql