Saturday 28 July 2012

Data Pump - 10g

Data Pump - expdp / impdp

Data Pump is a server-side tool; it cannot write files to the client machine. The dump files are always created on the database server, in a location defined by a directory object.

Data pump files

* Dump files :- Table data and metadata (rows + definitions).
* Log files :- Details of the export or import operation.
* SQL files :- When impdp is run with the SQLFILE parameter, the DDL contained in the dump file is written to a SQL file instead of being executed; the file can be reviewed and run later (see the example below).
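
For example, a minimal sketch (the directory, dump file, and output file names here are only illustrative): running impdp with the SQLFILE parameter writes the DDL it finds in the dump file to full_ddl.sql instead of executing it.

$ impdp manzoor/password directory=expdir dumpfile=full.dmp sqlfile=full_ddl.sql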

Types of processes

a) Master Process. 
b) Worker Process. 

Master Process DMnn [ appears at the OS level as ora_dmnn_<SID> ]

It creates, executes, and monitors the job. It also creates the worker processes (DWnn), assigns work to them, and manages them.

Master table:-

For each Data Pump job, a master table is created in the schema of the user running the job. The master table records the parameters that were used, how much work has been completed, and where the job stopped if it was terminated. The master process refers to this table to execute and restart the job (see the example below).
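
The master table is named after the job, so it can be located with an ordinary dictionary query; for a schema-mode export the default job name is of the form SYS_EXPORT_SCHEMA_NN (the table is dropped automatically when the job completes successfully). A small sketch, run as the user who started the job:

SQL> select table_name from user_tables where table_name like 'SYS_EXPORT%';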

The number of worker processes is controlled by the PARALLEL parameter. If PARALLEL=3 is specified, three worker processes are used for that operation; the job completes faster when more worker processes are used (see the sketch below).
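
A hedged sketch of a parallel schema export (the user, schema, and file names are illustrative). With PARALLEL greater than 1 it is common to include the %U substitution variable in DUMPFILE so that each worker process can write to its own dump file:

$ expdp manzoor/password directory=expdir dumpfile=sch%U.dmp \
  logfile=logdir:sch.log schemas=manz parallel=3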


Views:-

USER_DATAPUMP_JOBS
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
V$DATAPUMP_JOB
V$DATAPUMP_SESSION
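
For example, a running or stopped job can be monitored with a query such as the following (column list trimmed for brevity):

SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;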

Examples:-

Create directories and grant read/write access to the user:-

create or replace directory expdir as '/u01/app/oradata/orcl'
/

create or replace directory logdir as '/u01/app/oradata/orcl'
/

grant read, write on directory expdir to manzoor
/

grant read, write on directory logdir to manzoor
/
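
The directory objects and the paths they point to can be verified from the dictionary, e.g.:

SQL> select directory_name, directory_path from dba_directories;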

Export Modes:-

Full, tables, tablespaces, schema, transport_tablespace.

To view all the parameters and their definitions:

$ expdp help=y
$ impdp help=y

Full export

$ expdp username/password@expdbnm directory=expdir dumpfile=full.dmp \
  logfile=logdir:full.log full=y

Full Import

$ impdp username/password@impdbnm directory=expdir dumpfile=full.dmp \
  logfile=logdir:fullimp.log full=y

Table Level Export:-

$ expdp username/password@expdbnm directory=expdir dumpfile=tbl.dmp \
  logfile=logdir:tbl1.log tables=manz.emp

Table level import

Importing to the same schema:-

$ impdp username/password@impdbnm directory=expdir dumpfile=tbl.dmp \
  logfile=logdir:tbl1imp.log tables=manz.emp table_exists_action=replace

Importing to a different schema:-

$ impdp username/password@impdbnm directory=expdir dumpfile=tbl.dmp \
  logfile=logdir:tbl1imp.log tables=manz.emp remap_schema=manz:aham \
  table_exists_action=skip

Schema Level Export:- 

$ expdp username/password@expdbnm directory=expdir dumpfile=sch.dmp \
  logfile=logdir:sch.log schemas=manz

Schema level import to Same Schema:- 

$ impdp username/password@impdbnm directory=expdir dumpfile=sch.dmp \
  logfile=logdir:schimp.log schemas=manz

Schema Level import to different Schema:-

$ impdp username/password@impdbnm directory=expdir dumpfile=sch.dmp \
  logfile=logdir:schimp.log schemas=manz remap_schema=manz:aham

Tablespace Level Export:-

$ expdp username/password@expdbnm directory=expdir dumpfile=tblspc.dmp \
  logfile=logdir:tblspc.log tablespaces=users parallel=3


Tablespace Level import to Same tablespace:-

$ impdp username/password@impdbnm directory=expdir dumpfile=tblspc.dmp \
  logfile=logdir:tblspcimp.log tablespaces=users

Tablespace Level import to different tablespace:-

$ impdp username/password@impdbnm directory=expdir dumpfile=tblspc.dmp \
  logfile=logdir:tblspcimp.log tablespaces=users remap_tablespace=users:users1

Note: For the transportable tablespace (TRANSPORT_TABLESPACES) method, see the Transport Tablespace post.


To Stop / Resume Export Operations:-

While the job is in progress, press Ctrl+C.

The client then switches to the interactive Export> prompt.

Export> help

Command            Description
------------------------------------------------------------------------------
ADD_FILE           Add dumpfile to dumpfile set.
CONTINUE_CLIENT    Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT        Quit client session and leave job running.
FILESIZE           Default filesize (bytes) for subsequent ADD_FILE commands.
HELP               Summarize interactive commands.
KILL_JOB           Detach and delete job.
PARALLEL           Change the number of active workers for current job.
                   PARALLEL=<number of workers>.
START_JOB          Start/resume current job.
STATUS             Frequency (secs) job status is to be monitored where
                   the default (0) will show new status when available.
                   STATUS[=interval]
STOP_JOB           Orderly shutdown of job execution and exits the client.
                   STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                   Data Pump job.
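
For example, the following interactive commands (values are illustrative) display the job status and then raise the degree of parallelism of the running job:

Export> status
Export> parallel=5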


Export> stop_job
Are you sure you wish to stop this job ([yes]/no): yes

The job is now stopped and can be resumed later.

To resume the job, attach to it and restart:

$ expdp username/password attach=job_name

Export> start_job
Export> continue_client

The job now resumes from the point where it was stopped and runs to completion. To abandon a job permanently instead, use KILL_JOB at the Export> prompt.
