Monday 30 July 2012

My experience of converting a cross platform (AIX to HP) single instance database to RAC using RCONFIG tool

It's been more than a month since I last wrote anything on my blog, due to my busy schedule. Well, I am here now to share my experience of converting a cross platform (AIX to HP) single instance database to a 2 node RAC using the RCONFIG tool.

If you have been following my blog, you might be aware that we have a plan to implement a 16 node production RAC on HP-UX Superdome Itanium servers using EMC DMX4 SAN storage.
As part of the production implementation, last week I configured the initial 8 node RAC successfully, and the immediate plan was to migrate the databases across operating systems (AIX to HP-UX) and convert a single instance database to RAC. I had already done this exercise on the development setup successfully; the only change this time is that I used the RCONFIG utility to convert the single instance database to RAC, whereas earlier I had used the DBCA method.

Oracle provides the following methods to convert a single instance database to RAC:

Grid Control
DBCA
Manual
RCONFIG (from 10gR2)

I must say, each DBA may choose a different method according to his/her convenience to achieve the result. The following illustrates how I migrated a database from AIX to HP-UX and converted it from a single instance to RAC (on 2 nodes).

Database and OS details

Source:

Database 10gR2 (10.2.0.4)
OS : AIX
non-ASM

Target:

Database 10gR2 (10.2.0.4)
OS : HP-UX Superdome Itanium
ASM

RCONFIG in brief

The RCONFIG tool was introduced in Oracle 10gR2; its main function is to convert a single instance database to RAC.
The tool (rconfig) can be found under the $ORACLE_HOME/bin directory.
The XML input file (ConvertToRAC.xml) resides under $ORACLE_HOME/assistants/rconfig/sampleXMLs (it is recommended to copy the file before using it).
Log files (rconfig.log etc.) can be found under $ORACLE_HOME/cfgtoollogs/rconfig.

The Convert verify option in the ConvertToRAC.xml file has three options:
  • Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to RAC conversion have been met before it starts conversion
  • Convert verify="NO": rconfig does not perform prerequisite checks, and starts conversion
  • Convert verify="ONLY" : rconfig only performs prerequisite checks; it does not start conversion after completing prerequisite checks 

Before you actually start the conversion, you can run rconfig with the Convert verify="ONLY" option, which checks all the prerequisites. At the end, the result code will be either 0 or 1, where 0 stands for operation success and 1 stands for operation failure; in case of failure, it also gives you the reason for the failure.
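For example, a verify-only dry run might look like this (the file name is illustrative; it is simply a copy of the sample XML with Convert verify="ONLY" set):

$ cd $ORACLE_HOME/bin
$ ./rconfig convertdb_verifyonly.xml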

The following RMAN command converts the database from AIX to the HP-UX platform:

RMAN> convert database new database 'ORCL'
 transport script '/tmp_backup/tempdb/transport.sql'
 to platform 'HP-UX IA (64-bit)'
 db_file_name_convert
'+DG_ORCL/ORCL/datafile','/tmp_backup/tempdb';
Move all the converted datafiles to the target machine and follow ML Note 414878.1 for further details on how to start the database on the target machine.
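Note that CONVERT DATABASE only works between platforms that share the same endian format (AIX and HP-UX Itanium are both big-endian). You can verify the target platform name and its endianness from the source database with a query like this:

SQL> select platform_name, endian_format
     from v$db_transportable_platform
     order by platform_name;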

The following illustrates how to convert a single instance database to RAC using the RCONFIG tool:

cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml convertdb.xml

Modify the convertdb.xml file according to your environment. The following is an example:

==
<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
    <n:ConvertToRAC>
<!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
        <n:Convert verify="NO">
<!-- Specify current OracleHome of non-rac database for SourceDBHome -->
              <n:SourceDBHome>/u00/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
<!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
              <n:TargetDBHome>/u00/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
<!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
              <n:SourceDBInfo SID="ORCL">
                <n:Credentials>
                  <n:User>sys</n:User>
                  <n:Password>password</n:Password>
                  <n:Role>sysdba</n:Role>
                </n:Credentials>
              </n:SourceDBInfo>
<!-- ASMInfo element is required only if the current non-rac database uses ASM Storage -->
              <n:ASMInfo SID="+ASM1">
                <n:Credentials>
                  <n:User>sys</n:User>
                  <n:Password>passwordasm</n:Password>
                  <n:Role>sysdba</n:Role>
                </n:Credentials>
              </n:ASMInfo>
<!-- Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
              <n:NodeList>
                <n:Node name="rac01"/>
                <n:Node name="rac02"/>
              </n:NodeList>
<!-- Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
              <n:InstancePrefix>ORCL</n:InstancePrefix>
<!-- Specify port for the listener to be configured for rac database. If port="", a listener existing on localhost will be used for rac database. The listener will be extended to all nodes in the nodelist -->
              <n:Listener port=""/>
<!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
              <n:SharedStorage type="ASM">
<!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
                <n:TargetDatabaseArea>+DG_ORCL</n:TargetDatabaseArea>
<!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
                <n:TargetFlashRecoveryArea>+DG_ORCL</n:TargetFlashRecoveryArea>
              </n:SharedStorage>
        </n:Convert>
    </n:ConvertToRAC>
</n:RConfig>
==
Once you have modified the convertdb.xml file according to your environment, use the following command to run the tool:

rconfig convertdb.xml

Oracle does the following during conversion:
oracle@usogp01: /u00/app/oracle/product/10.2.0/db_1/bin # ./rconfig convertdb.xml
Converting Database ORCL to Cluster Database. Target Oracle Home : /u00/app/oracle/product/10.2.0/db_1.
Setting Data Files and Control Files
Adding Database Instances
Adding Redo Logs
Enabling threads for all Database Instances
Setting TEMP tablespace
Adding UNDO tablespaces
Adding Trace files
Setting Flash Recovery Area
Updating Oratab
Creating Password file(s)
Configuring Listeners
Configuring related CRS resources
Adding NetService entries
Starting Cluster Database
Starting Listeners

<?xml version="1.0" ?>
<RConfig>
  <ConvertToRAC>
    <Convert>
      <Response>
        <Result code="0">
          Operation Succeeded
        </Result>
      </Response>
      <ReturnValue type="object">
        <Oracle_Home>
          /u00/app/oracle/product/10.2.0/db_1
        </Oracle_Home>
        <SIDList>
          <SID>ORCL1</SID>
          <SID>ORCL2</SID>
        </SIDList>
      </ReturnValue>
    </Convert>
  </ConvertToRAC>
</RConfig>

oracle@usogp01: /u00/app/oracle/product/10.2.0/db_1/bin #

==
As you can see, the result code is 0, which indicates the success of the conversion process.


Hurdles faced during the conversion process

The only error I got during the conversion was a missing listener.ora under $ORACLE_HOME/network/admin, as we had created the listener under the ASM home (we have separate RDBMS and ASM homes). The problem was resolved by creating a link in the RDBMS home's network/admin directory pointing to the ASM home's listener.ora file.
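Assuming separate homes, the fix was just a symbolic link (the $ASM_HOME variable here is illustrative):

$ ln -s $ASM_HOME/network/admin/listener.ora $ORACLE_HOME/network/admin/listener.ora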

We contacted one of the developers of the rconfig utility to clarify whether the following two changes in a single run would work:

non-ASM to ASM
single instance to RAC

He said that the main purpose of this utility is to convert a single instance database to RAC, and optionally to move a few non-ASM files of the target database into ASM during the conversion. But he was not sure about moving an entire non-ASM database into ASM during the conversion, as they did not have a test case for it; he nevertheless asked us to go ahead and try. Well, we were amazed that it worked for us, and we were delighted to inform the developer that both changes worked and that he can have our reference.

Our next migration/conversion is our E-Business Suite and, of course, I am going to share my experience here.

References

http://download-west.oracle.com/docs/cd/B19306_01/install.102/b14201/cvrt2rac.htm#sthref1273


Important Meta Link Notes:

ML Note:387046.1 RCONFIG : Frequently Asked Questions
ML Note:371519.1 RCONFIG Error When Converting From Single Instance To RAC
ML Note:375472.1 RCONFIG FAILS with an null pointer EXCEPTION
ML Note:391297.1 RCONFIG fails when using ASM as Shared Storage
ML Note:388577.1 Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12

Terminating sessions on a specific RAC database instance

As DBAs, we typically use the 'ALTER SYSTEM KILL SESSION' statement to terminate (kill) a specific session in the database. If the session is active and in the middle of some operation, we tend to get 'ORA-00031: session marked for kill', and the session is not terminated until the outstanding activity completes.

I then learned about the IMMEDIATE clause ('ALTER SYSTEM KILL SESSION ... IMMEDIATE'), which terminates the session without waiting for the outstanding activity to complete. In addition, it records the information (SID, PID) in the database alert.log file.
Today, I came across the ability to terminate sessions on a specific RAC database instance in Oracle 11gR2 (I am not sure about Oracle 11gR1). This gives the flexibility to terminate a session on a specific RAC database instance across the cluster environment. The following is the syntax and an example of the enhancement:

Syntax 
KILL SESSION 'sid, serial#[, @integer3]'
 
Example
ALTER SYSTEM KILL SESSION '1287, 9823, @2' IMMEDIATE; 
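To find the sid, serial# and instance number of the session you want to terminate across the cluster, a query against GV$SESSION such as the following can be used (the username is illustrative):

SQL> select inst_id, sid, serial#, username, status
     from gv$session
     where username = 'SCOTT';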
 

Bug 14026888 - Datapump import gets ORA-600 [kpudpxcs_ctxConvertStream_ref_1] [ID 14026888.8] - ORA-31693,ORA-02354,ORA-00600

When a full database Data Pump import was executed this morning to load data from an Oracle 11gR1 database into an 11gR2 database, the job failed with the following ORA errors on over 50 tables that have LOB columns:

ORA-31693: Table data object failed to load/unload and is being skipped due to
ORA-02354: error in exporting/importing data
ORA-00600: internal error code, arguments: [kpudpxcs_ctxConvertStream_ref_1],
           [SYS_TYPEID("SHAPE")]

A very quick search on Metalink for the ORA-600 error brought the following facts and workaround to our knowledge:



Affects:

  Product (Component)                       : Oracle Server (Rdbms)
  Range of versions believed to be affected : Versions BELOW 12.1
  Versions confirmed as being affected      : (none listed)
  Platforms affected                        : Generic (all / most platforms affected)

Fixed:

  This issue is fixed in                    : (no fix available yet)

Since there is no fix available yet, we applied the workaround mentioned in the note: using conventional exp/imp. Bravo, the workaround did work and the issue was resolved.

Note: The bug appears to be a generic one, applicable to all platforms.
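A minimal sketch of the workaround, assuming one of the affected tables is SCOTT.SHAPES (the table name, credentials, and connect strings are all illustrative):

$ exp system/manager@src11gr1 tables=SCOTT.SHAPES file=shapes.dmp log=exp_shapes.log
$ imp system/manager@tgt11gr2 file=shapes.dmp log=imp_shapes.log fromuser=SCOTT touser=SCOTT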


RMAN Incremental Backup and Restore


Incremental backup and restore in Oracle 10g

Here I present simple steps to demonstrate how to take incremental backups and perform restore and recovery using RMAN. One can learn RMAN incremental backup easily by going through this post. I used Oracle 10g Express Edition.

The difference between differential and cumulative incremental backups will become clear by going through this simple demo.

Simple scenarios have been taken.

Demonstrating DIFFERENTIAL INCREMENTAL BACKUP

1. Database is in NOARCHIVELOG mode.
2. Not using recovery catalog.
3. The RMAN configuration settings that I used are as follows. I am posting the output of
RMAN> SHOW ALL;

using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/usr/lib/oracle/xe/backup/cf_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 400 M FORMAT '/usr/lib/oracle/xe/backup/df_%U';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/dbs/snapcf_XE.f'; # default

4. Since the database is in NOARCHIVELOG mode, it is necessary to shut it down cleanly using the IMMEDIATE, NORMAL, or TRANSACTIONAL options.

RMAN> shutdown

5. To take the incremental level 0 backup, mount the database. In NOARCHIVELOG mode this step is required even for non-incremental backups.

RMAN> startup mount

6. Issue
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'FULL_INC';
This takes a full backup of the database and also includes the controlfile as well as the spfile.

7. Issue
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIME
----------  ---------  ---------  ---------  ---------  ---  --------  -------------  ----------
         1          1          0   52428800          1  YES  UNUSED                0
         2          1          1   52428800          1  NO   CURRENT         1077758  08-MAR-11


SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
------------------
1084366


8. RMAN> ALTER DATABASE OPEN;

9. Now, perform some DML operations. For example,

SQL> CREATE TABLE T1(C1 NUMBER);
SQL> INSERT INTO T1 VALUES(10);
SQL> /
SQL> /
SQL> COMMIT;

10. Again, shut down the database to take the incremental level 1 backup.
RMAN>SHUTDOWN
RMAN>STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INC_1';
It backs up only the changes made since the most recent incremental backup at the same or a lower level (here, since the level 0 backup).

11. Open the database again to perform some insertions.
RMAN> ALTER DATABASE OPEN;

12. SQL> INSERT INTO T1 VALUES(1);
SQL> /
SQL> /
SQL> /
SQL> COMMIT;

13. Shut down the database again to take an incremental level 2 backup.
RMAN> SHUTDOWN
RMAN> STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 2 DATABASE TAG 'INC_2';

14. Open the database again to make some insertions.
RMAN> ALTER DATABASE OPEN;
SQL> INSERT INTO T1 VALUES(9);
SQL> /
SQL> /
SQL> COMMIT;

15. SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIME
----------  ---------  ---------  ---------  ---------  ---  --------  -------------  ----------
         1          1          0   52428800          1  YES  UNUSED                0
         2          1          1   52428800          1  NO   CURRENT         1077758  08-MAR-11


SQL> select checkpoint_change# from v$database;


CHECKPOINT_CHANGE#
------------------
1084663

Here, all the changes since the last backup are stored in the redo logs.

16. Now, delete controlfile.
$ mv oradata/XE/controlfile cf.bak

17. RMAN> SHUTDOWN ABORT;

18. RMAN> STARTUP
The database will not open; it will only go up to the NOMOUNT state, since the controlfile has been lost.

19. Now the following steps are needed to recover the database.

20. RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/usr/lib/oracle/xe/backup/cf_%F';
If the controlfile autobackup is located in the default location, we can skip this step.

21. RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;

22. RMAN> ALTER DATABASE MOUNT;

23. RMAN> RESTORE DATABASE;
It will restore the database using the level 0 backup.

24. RMAN> RECOVER DATABASE;
Watch the output carefully. You can recognize the various backups being applied; look for the tags that you gave the backup sets. RMAN applies the incrementals one by one: first the level 1 incremental, then the level 2. It then searches for the appropriate log sequence and applies it if found. If no log switch has taken place after the last incremental backup, we get all the data back without any loss, and the database is restored up to the current point in time. In our case no log switch has taken place and all the changes since the last backup exist in the redo logs. The NOREDO option is needed when a log switch has taken place; it is not needed if no log switch has occurred.
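If you want to see beforehand which backup sets and tags are available to the recovery, a listing like this helps (output omitted):

RMAN> LIST BACKUP OF DATABASE SUMMARY;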

25.RMAN> ALTER DATABASE OPEN RESETLOGS;

26. Now view the table T1; you will find it restored up to the latest point in time if all the redo has been applied.

27. Always take a full backup of the database after opening it in RESETLOGS mode.
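For example (the tag is illustrative):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'POST_RESETLOGS';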

Demonstrating CUMULATIVE INCREMENTAL BACKUP

Here we will use the same scenario as above, i.e., no recovery catalog and NOARCHIVELOG mode.
We will start here by taking incremental level 0 backup.

1. RMAN> SHUTDOWN 
2. RMAN> STARTUP MOUNT 
3. RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'FULL_INC';
4. RMAN> ALTER DATABASE OPEN;

5. SQL> SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;
6. Make some insertions.
SQL> INSERT INTO T1 VALUES(2);
SQL> /
SQL> /
SQL> COMMIT;

7. Again shut down the database to take an incremental level 1 backup, which copies only the blocks changed since the last incremental level 0 backup.
RMAN> SHUTDOWN
RMAN> STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INC_1';

8. RMAN> ALTER DATABASE OPEN;

9. SQL> INSERT INTO T1 VALUES(3);
SQL> /
SQL> /
SQL> COMMIT;

10. Again shut down the database, this time to take a cumulative incremental level 1 backup. A cumulative level n backup copies all the changes made since the most recent backup at level n-1 or lower; here it backs up all the changes since the incremental level 0 backup.
RMAN> SHUTDOWN
RMAN> STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'CUM_1';

11. RMAN> ALTER DATABASE OPEN;

12. Perform some DML again.
SQL> INSERT INTO T1 VALUES(9);
SQL>/
SQL> /
SQL> COMMIT;

13. Now we will shut down the database to take a differential incremental level 1 backup this time.
RMAN> SHUTDOWN
RMAN> STARTUP MOUNT
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INC_1_1';

14. RMAN> ALTER DATABASE OPEN;

15. SQL> INSERT INTO T1 VALUES(0);
SQL> /
SQL> COMMIT;

16. Delete controlfile.
$ mv oradata/XE/controlfile cf.bak

17. RMAN> SHUTDOWN ABORT

18. RMAN> STARTUP
The database will not open; it will only go up to the NOMOUNT state.

19. RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/usr/lib/oracle/XE/backup/cf_%F';

20. RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;

21. RMAN> ALTER DATABASE MOUNT;

22. RMAN> RESTORE DATABASE;
It will restore the database from the last incremental level 0 backup.

23. RMAN> RECOVER DATABASE;
Here the NOREDO option is not necessary, as it is assumed that the redo logs have not been switched and complete recovery is possible. Check this by looking at the contents of v$log and comparing with the checkpoint_change# noted in step 5. If all the changes since that value exist in the redo logs, the NOREDO option is not needed; otherwise it is required.

Here, the cumulative incremental level 1 backup is applied first. There is no need for the backup with tag 'INC_1', because the cumulative level 1 backup contains all the changes since the last level 0 backup; thus our first differential level 1 backup is not applied. RMAN then applies the backup with tag 'INC_1_1', which we took after the cumulative backup. After that, it applies all the changes recorded in the online redo logs.
Thus only two incremental backups needed to be applied here, whereas in the first scenario all the incremental backups were applied. We can therefore say that using cumulative incremental backups in your incremental strategy provides faster recovery, since fewer incremental backups need to be applied.

24. RMAN> ALTER DATABASE OPEN RESETLOGS;

25. Take a whole database backup after opening the database in RESETLOGS mode. It is good practice to do so.

26. Now check your table. It must have all the changes that we made, keeping the whole scenario the same.

Linux Top Command Usage

top
The top command is probably the most useful one for an Oracle DBA managing a database on Linux. Say the system is slow and you want to find out who is  gobbling up all the CPU and/or memory. To display the top processes, you use the command top.

Note that unlike other commands, top does not produce output and exit; it stays up and refreshes the screen to display new information. So if you just issue top and leave the screen up, the most current information is always shown. To stop and exit to the shell, you can press Control-C.
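If you want to capture the output in a file instead of watching the screen, procps top also has a batch mode; for example:

$ top -b -n 1 > top_snapshot.txt    (one iteration, non-interactive)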

It's probably needless to say that the top utility comes in very handy for analyzing the performance of database servers. Here is a partial top output:

[top output screenshot not reproduced here; the key figures are discussed below]
Let's analyze the output carefully. The first thing you should notice is the "idle" column under CPU states: it's 0.0%, meaning the CPU is completely occupied doing something. The question is, doing what? Move your attention to the column "system", just slightly left; it shows 5.6%. So the system itself is not doing much. Go further left to the column marked "user", which shows 1.0%. Since user processes include Oracle, Oracle is not consuming the CPU cycles either. So, what's eating up all the CPU?

The answer lies in the same line, just to the right under the column "iowait", which indicates 91.2%. This explains it all: the CPU is waiting for IO 91.2% of the time.

So why so much IO wait? The answer lies in the display. Note the PID of the highest consuming process: 16143. You can use the following query to determine what the process is doing:

select s.sid, s.username, s.program
from v$session s, v$process p
where p.spid = 16143
and p.addr = s.paddr
/
       SID USERNAME PROGRAM
---------- -------- -----------------------------
       159 SYS      rman@prolin2 (TNS V1-V3)

So it is the rman process that is generating the IO and the associated waits. This information helps you determine the next course of action.

To Find the Size of a Schema in the Database

SQL> set pagesize 10000
SQL> BREAK ON REPORT
SQL> COMPUTE SUM LABEL TOTAL OF "Size of Each Segment in MB" ON REPORT
SQL> select segment_type, sum(bytes/1024/1024) "Size of Each Segment in MB" from dba_segments where owner='SYS' group by segment_type order by 1;

SEGMENT_TYPE       Size of Each Segment in MB
------------------ --------------------------
CLUSTER                             1872.0625
INDEX                               1419.3125
INDEX PARTITION                        166.25
LOB PARTITION                           .1875
LOBINDEX                              17.1875
LOBSEGMENT                           478.8125
NESTED TABLE                                1
ROLLBACK                                    2
TABLE                                1218.875
TABLE PARTITION                      441.3125
TYPE2 UNDO                          1055.5625
                   --------------------------
TOTAL                                6672.5625

11 rows selected.
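A variation of the same idea gives the size per schema for the whole database:

SQL> select owner, round(sum(bytes)/1024/1024) "Size in MB"
     from dba_segments
     group by owner
     order by 2 desc;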

Row Migration & Row Chaining

What is Row Chaining
-----------------------------


Row chaining happens when a row is too large to fit into a single database block. For example, if you use an 8KB block size for your database and you need to insert a 16KB row, Oracle will use two to three blocks and store the row in a chain of data blocks for that segment. Row chaining happens only when the row is being inserted.

You can show the block size of the database with:
SQL> conn / as sysdba
SQL> show parameter db_block_size



What is Row Migration
---------------------

Row migration occurs when a row that originally fit into one data block is updated so that the overall row length increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in the new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row; the rowid of a migrated row does not change.



How to identify Row Chaining and Row Migration
----------------------------------------------

There are three ways (a sketch of each follows):
    1) The ANALYZE command
    2) USER_TABLES
    3) V$SYSSTAT
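A minimal sketch of each approach, using an illustrative SCOTT.EMP table:

-- 1) ANALYZE ... LIST CHAINED ROWS (run ?/rdbms/admin/utlchain.sql first to create the CHAINED_ROWS table)
SQL> analyze table scott.emp list chained rows into chained_rows;
SQL> select count(*) from chained_rows where table_name = 'EMP';

-- 2) USER_TABLES.CHAIN_CNT (populated by ANALYZE ... COMPUTE STATISTICS)
SQL> analyze table scott.emp compute statistics;
SQL> select table_name, chain_cnt from user_tables where table_name = 'EMP';

-- 3) V$SYSSTAT counter, incremented whenever a fetch has to follow a chained or migrated row piece
SQL> select name, value from v$sysstat where name = 'table fetch continued row';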

Sunday 29 July 2012

Crontab Details


00 18 * * 0 /db/bin/stats_009_bdp01.sh >/ora_bdp01/df/log/stats_009_bdp01.log$$ 2>&1

00 18 * * 0 /db/bin/stats_078_btp01.sh >/ora_btp01/df/log/stats_078_btp01.log$$ 2>&1

00 18 * * 0 /db/bin/stats_020_bcp01.sh >/ora_bcp01/df/log/stats_020_bcp01.log$$ 2>&1


00 17 * * 0 /db/bin/stats_002_bkp01.sh >/ora_bkp01/df/log/stats_002_bkp01.log$$ 2>&1

00 17 * * 0 /db/bin/stats_014_brp01.sh >/ora_brp01/df/log/stats_014_brp01.log$$ 2>&1

00 16 * * 0 /db/bin/stats_062_bgp01.sh >/ora_bgp01/df/log/stats_062_bgp01.log$$ 2>&1

00 16 * * 0 /db/bin/stats_065_bxp01.sh >/ora_bxp01/df/log/stats_065_bxp01.log$$ 2>&1



*/5 * * * *  /home/user/test.pl

#  .---------------- minute (0 - 59)
#  |   .------------- hour (0 - 23)
#  |   |   .---------- day of month (1 - 31)
#  |   |   |   .------- month (1 - 12) OR jan,feb,mar,apr ...
#  |   |   |   |  .----- day of week (0 - 6) (Sunday=0 or 7)  OR sun,mon,tue,wed,thu,fri,sat
#  |   |   |   |  |
#  *   *   *   *  *  command to be executed



I want a job to start every day at 22:00.

0 22 * * * ss.sh    (script will run every day at 10 PM)

I want a job to start once a week at 18:00.

0 18 * * 0 ss.sh    (script will run at 6 PM every Sunday)

I want a job to run at 19:00 on 14 February (cron has no year field, so it will run every year).

0 19 14 2 * ss.sh    (script will run at 7 PM on day 14 of month 2)

I want a job to run on the 1st of each month.

0 19 1 * * ss.sh    (script will run at 7 PM on the 1st day of every month)


$ crontab -e    (edit the crontab)
$ crontab -l    (list the crontab)

Golden Gate - Unidirectional Replication Initial Load

Golden Gate:-

A) Setting up Unidirectional replication - Initial Load

Scenario 1)

Replicating between two databases on the same host.


Source db - test11
Target db - layatest
Hostname - rhel.manzoor.com


Steps :-
------

a) Go to Oracle E-Delivery (http://edelivery.oracle.com).
b) Select Oracle Fusion Middleware and select the platform accordingly.
c) Download the latest version of the GG software.
d) Transfer the GG software to the Linux machine using ftp or WinSCP.
e) Create a gg home directory

$ mkdir -p /u01/oracle/product/11.2.0/gg

f) Add the below environment variables in .bash_profile.

$ vi .bash_profile

export ORACLE_GG=/u01/oracle/product/11.2.0/gg
export PATH=$ORACLE_HOME/bin:$ORACLE_GG:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_GG

g) Copy the GG software to the GG home, unzip the file, and then untar it.
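For example, the unzip step might look like this (the zip file name depends on the version you downloaded; this one is illustrative):

$ cd $ORACLE_GG
$ unzip fbo_ggs_Linux_x86_ora11g_32bit.zip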


$ tar -xvf fbo_ggs_Linux_x86_ora11g_32bit.tar

h) Connect to the GoldenGate command line interface and create the necessary directories.

[oracle@rhel u01] cd $ORACLE_GG
[oracle@rhel gg]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x86, 32bit (optimized), Oracle 11g on Apr 23 2012 08:09:25

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (rhel.manzoor.com) 2> CREATE SUBDIRS

Creating subdirectories under current directory /u01/oracle/product/11.2.0/gg

Parameter files /u01/oracle/product/11.2.0/gg/dirprm: created
Report files /u01/oracle/product/11.2.0/gg/dirrpt: created
Checkpoint files /u01/oracle/product/11.2.0/gg/dirchk: created
Process status files /u01/oracle/product/11.2.0/gg/dirpcs: created
SQL script files /u01/oracle/product/11.2.0/gg/dirsql: created
Database definitions files /u01/oracle/product/11.2.0/gg/dirdef: created
Extract data files /u01/oracle/product/11.2.0/gg/dirdat: created
Temporary files /u01/oracle/product/11.2.0/gg/dirtmp: created
Stdout files /u01/oracle/product/11.2.0/gg/dirout: created


GGSCI (rhel.manzoor.com) 3> exit


i) Create a GoldenGate user which will be used by the Manager / Extract / Replicat processes. Create the user and grant the necessary privileges.


DB : Execute the below on both databases (test11 & layatest)

sql> create tablespace gg_tblspc datafile '/orafiles/test11/ggs_tblspc01.dbf' size 100m;

sql> create user ggowner identified by ggowner default tablespace gg_tblspc temporary tablespace temp;

sql> grant create session to ggowner;

sql> grant connect, resource to ggowner;

sql> grant select any dictionary, select any table, insert any table to ggowner;

sql> grant flashback any table to ggowner;

sql> grant execute on dbms_flashback to ggowner;

sql> grant execute on utl_file to ggowner;

sql> grant create table to ggowner;

-- Enable supplemental logging. If supplemental logging is not enabled, the Extract process may handle key updates incorrectly when the key column is not in the first row piece.

sql> alter database add supplemental log data (all) columns;

Database altered
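You can confirm the setting with a query like this:

sql> select supplemental_log_data_min, supplemental_log_data_all from v$database;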

j) Test whether you can connect to both databases using the above credentials.


GGSCI (rhel.manzoor.com) 3> dblogin userid ggowner@test11, password ggowner
Successfully logged into database.

GGSCI (rhel.manzoor.com) 4> dblogin userid ggowner@layatest, password ggowner
Successfully logged into database.

k) Configure the Manager process and start it.

Port -
The only mandatory parameter we need to specify is PORT, which defines the port on the local system where the Manager process runs. The default port is 7809; we can either use the default port or some other port, provided the port is available and not restricted in any way.

AUTOSTART -
Another recommended optional parameter is AUTOSTART, which automatically starts the Extract and Replicat processes when the Manager starts.

USERID PASSWORD
The USERID and PASSWORD parameters are required if you enable GoldenGate DDL support; this is the Oracle user account that we created for the Manager (and Extract/Replicat).

PURGEOLDEXTRACTS
The Manager process can also clean up trail files from disk when GoldenGate has finished processing them, via the PURGEOLDEXTRACTS parameter. Used with the USECHECKPOINTS clause, it ensures that trail files are not deleted until all processes have finished using the data they contain.


GGSCI (rhel.manzoor.com) 5> EDIT PARAMS MGR
PORT 7809
USERID ggowner, PASSWORD ggowner
PURGEOLDEXTRACTS /u01/oracle/product/11.2.0/gg/dirdat/ex, USECHECKPOINTS


- The Manager process can be started using START MANAGER, stopped using STOP MANAGER, and its status checked using INFO MANAGER.

Now start the Manager process.

GGSCI (rhel.manzoor.com) 2> START MANAGER

Manager started.


GGSCI (rhel.manzoor.com) 3> INFO MANAGER

Manager is running (IP port rhel.manzoor.com.7809).


## Initial data load.


Here we are going to replicate the manzoor.emp table in test11 to manzoor.emp in layatest.

On the source db we need to create an Extract process. The process must be defined with a group name; in this example we use the group name LDTESTDB.


db: test11

SQL> select count(*) from manzoor.emp;

COUNT(*)
----------
100000

db : layatest

SQL> select count(*) from manzoor.emp;

COUNT(*)
----------
0


GGSCI (rhel.manzoor.com) 8> ADD EXTRACT LDTESTDB, SOURCEISTABLE
EXTRACT added.

-- Here the data is present in the table and not in the archive/redo log files; hence we need to specify SOURCEISTABLE.

Create a parameter file for the extract process.

EXTRACT : name of the extract group
USERID/PASSWORD : the database user which has been configured earlier for Extract ( this user is created in the source database)
RMTHOST : This will be the IP address or hostname of the target system
MGRPORT : the port where the Manager process is running
TABLE : specify the table which is being extracted and replicated. This can be specified in a number of ways, using wildcard characters to include or exclude tables as well as entire schemas.


GGSCI (rhel.manzoor.com) 10> EDIT PARAMS LDTESTDB

EXTRACT LDTESTDB
USERID ggowner@test11, PASSWORD ggowner
RMTHOST rhel.manzoor.com, MGRPORT 7809
RMTTASK replicat, GROUP LDLAYADB
TABLE manzoor.emp;


## Now create the Replicat process. The group name should be the one we specified in the extract process parameters via RMTTASK / GROUP, which is LDLAYADB.

GGSCI (rhel.manzoor.com) 15> ADD REPLICAT LDLAYADB, SPECIALRUN
REPLICAT added.


-- Since it is a special run we need to include the keyword SPECIALRUN.

Create the parameter file for the replicat process.

REPLICAT : name of the Replicat group created for the initial data load
USERID/PASSWORD : database credentials for the Replicat user (this user is created in the target database)

ASSUMETARGETDEFS : this means that the source table structure exactly matches the target database table structure.

MAP : with GoldenGate the target database structure can differ entirely from that of the source, in table names as well as column definitions. This parameter provides the mapping between source and target tables, which are the same in this case.


GGSCI (rhel.manzoor.com) 4> EDIT PARAMS LDLAYADB

REPLICAT LDLAYADB
USERID ggowner@layatest, PASSWORD ggowner
ASSUMETARGETDEFS
MAP manzoor.emp, TARGET manzoor.emp;

## Now we can start the extract process for the initial load.


GGSCI (rhel.manzoor.com) 1> START EXTRACT LDTESTDB

Sending START request to MANAGER ...
EXTRACT LDTESTDB starting


## Check the status using the info command.

GGSCI (rhel.manzoor.com) 2> INFO EXTRACT LDTESTDB

EXTRACT    LDTESTDB  Initialized   2012-07-27 19:35   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record   Record 0
Task                 SOURCEISTABLE

## In the info output we can see that the status shows RUNNING.




## If there are any errors, we can view them in the report:

GGSCI (rhel.manzoor.com) 3> VIEW REPORT LDTESTDB


GGSCI (rhel.manzoor.com) 1> INFO EXTRACT LDTESTDB

EXTRACT    LDTESTDB  Last Started  2012-07-28 09:19   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table MANZOOR.EMP
                     2012-07-28 09:22:18  Record 100000
Task                 SOURCEISTABLE


## Now the status shows STOPPED and the record count shows 100000. Let's check the record count on the target table.

SQL> select count(*) from manzoor.emp;

COUNT(*)
----------
100000


-- The target table has been replicated correctly, matching the source.