Thursday 6 September 2012

Useful 11gR2 Grid Infrastructure (Clusterware) commands

The syntax to start and stop the listener is as follows:
--------------------------------------------------------
srvctl start listener -n nodename
srvctl stop listener -n nodename
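
For example (assuming a node named node1; substitute your own node names), the same commands with concrete values would be:

srvctl start listener -n node1    ---> start the default node listener on node1
srvctl stop listener -n node1     ---> stop it again
srvctl status listener -n node1   ---> check the listener state on that node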

Create a disk group with SQL:
----------------------------
CREATE DISKGROUP ACFS NORMAL REDUNDANCY
  FAILGROUP ACFS2
    DISK 'ORCL:DATA7' SIZE 1953 M, 'ORCL:DATA8' SIZE 2329 M
  FAILGROUP ACFS1
    DISK 'ORCL:DATA5' SIZE 1953 M, 'ORCL:DATA6' SIZE 1953 M
  ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0',
            'compatible.asm'   = '11.2.0.0.0',
            'compatible.advm'  = '11.2.0.0.0';
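
The new disk group is typically mounted only on the ASM instance where the CREATE DISKGROUP statement ran; a quick check (and a manual mount on any remaining instance, if needed) could look like this:

SELECT NAME, STATE, TYPE, TOTAL_MB, FREE_MB FROM V$ASM_DISKGROUP;

ALTER DISKGROUP ACFS MOUNT;   ---> run on any ASM instance where the group shows as DISMOUNTED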

Controlling Oracle Clusterware -
------------------------------

crsctl start crs

crsctl stop crs

crsctl enable crs

crsctl disable crs

To determine the overall health on a specific node: 

crsctl check crs

To check the viability of CSS across the nodes:

crsctl check cluster

To determine the location of the Voting disk:

crsctl query css votedisk

To determine the location of the OCR:

cat /etc/oracle/ocr.loc


Checking the integrity of Oracle Clusterware Configuration Files:

grep voting <grid_home>/log/<hostname>/cssd/ocssd.log

or

cluvfy comp ocr -n all -verbose

or

ocrcheck

To add or remove Voting disks (11gR2) for NON-ASM storage -

crsctl delete css votedisk path_to_voting_disk

crsctl add css votedisk path_to_voting_disk

To add Voting disk to ASM -

crsctl replace votedisk +asm_disk_group

To migrate voting disks from non-ASM storage devices to ASM or vice versa, specify the ASM disk group name or the path to the non-ASM storage device:

crsctl replace votedisk {+asm_disk_group |path_to_voting_disk}
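
For example (the disk group name and file path below are placeholders; substitute your own):

crsctl replace votedisk +CRS                      ---> move the voting disks into the +CRS disk group
crsctl replace votedisk /shared/grid/vote01.dbf   ---> move them onto a shared file system path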

Locating OCR Automatic Backups -

ocrconfig -showbackup auto

Default target location for each automatically generated OCR backup file -

<Grid_home>/cdata/<Cluster_name>

Changing the automatic OCR backup location (best practice is to store it in a shared location) -

ocrconfig -backuploc <New_path_name>
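
For example, with a hypothetical shared path that is visible from every node (run as root):

ocrconfig -backuploc /u01/app/shared/ocrbackup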


Add an OCR location on either ASM or another storage device:

ocrconfig -add +DATA2
ocrconfig -add /dev/sde1

To replace the current OCR location:
 ocrconfig -replace /dev/sde1 -replacement +DATA2

To repair the OCR configuration, run this command on the node on which you have stopped Oracle Clusterware (you cannot perform this operation on a node where Oracle Clusterware is running):

ocrconfig -repair -add +DATA1

Removing an OCR location (do not remove an OCR location unless at least one other OCR location remains online) -

ocrconfig -delete +DATA2
ocrconfig -delete /dev/sde1

Performing Manual OCR backups-

ocrconfig -manualbackup

ocrconfig -showbackup manual

To perform a logical backup of the OCR -

ocrconfig -export /home/oracle/ocr.backup
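
Should that export ever need to be restored, the counterpart is ocrconfig -import; Oracle Clusterware has to be stopped on all nodes first. A rough sketch, using the same file path as above:

crsctl stop crs                              ---> as root, on every node
ocrconfig -import /home/oracle/ocr.backup    ---> as root, on one node
crsctl start crs                             ---> as root, on every node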

To determine the list of interfaces available to the cluster -

oifcfg iflist -p -n

To determine the public and private interfaces that have been configured -

oifcfg getif

To determine the VIP hostname, VIP address, VIP subnet mask, and VIP interface names -

srvctl config nodeapps -a

Applying a Bug Patch in Rolling Mode to the CRS & DB Homes

PLATFORM  - RHEL 5.7
DB_VERSION  - 11.2.0.1.0
GRID_VERSION  - 11.2.0.1.0
DB_HOME  - /u01/app/oracle/product/11.2.0/db_1
GRID_HOME  - /u01/app/11.2.0/grid
PATCH APPLIED - BUG PATCH 13339443 FOR BOTH CRS AND DB
PATCH DESC - Patch 13339443: ESSC: CRS AGENT KILLS ASM INSTANCE DUE TO ORA-15042


***** ON NODE 1 ******************


TO VERIFY LSINVENTORY -
---------------------


/u01/app/11.2.0/grid/OPatch/opatch lsinventory -local -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc


/u01/app/oracle/product/11.2.0/db_1/OPatch/opatch lsinventory -local -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc




TO STOP THE DATABASE RESOURCE -
------------------------------


/u01/app/oracle/product/11.2.0/db_1/bin/srvctl stop home -o /u01/app/oracle/product/11.2.0/db_1 -s /u01/log/tmp.log -n node1




AS ROOT EXECUTE BELOW TO UNLOCK THE FILE PERMISSIONS-
----------------------------------------------------


/u01/app/11.2.0/grid/crs/install/rootcrs.pl -unlock




AS ORACLE USER (PRE-PATCH, APPLY PATCH ON BOTH HOMES, POST-PATCH)-
-----------------------------------------------------------------


cd /u01/software/grid_patch/13339443/custom/scripts


./prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1


/u01/app/11.2.0/grid/OPatch/opatch napply -local -oh /u01/app/11.2.0/grid -id 13339443


cd /u01/software/grid_patch/13339443


/u01/app/oracle/product/11.2.0/db_1/OPatch/opatch napply custom/server/ -local -oh /u01/app/oracle/product/11.2.0/db_1 -id 13339443


cd /u01/software/grid_patch/13339443/custom/scripts


./postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1




AS ROOT EXECUTE BELOW TO LOCK THE FILE PERMISSIONS -
--------------------------------------------------


/u01/app/11.2.0/grid/crs/install/rootcrs.pl -patch




TO START THE DATABASE RESOURCE AS ORACLE USER-
----------------------------------------------


/u01/app/oracle/product/11.2.0/db_1/bin/srvctl start home -o /u01/app/oracle/product/11.2.0/db_1 -s /u01/log/tmp.log -n node1



[oracle@node1 ~]$ cat /u01/log/tmp.log
db-racdevdb



TO VERIFY THE LSINVENTORY -
-------------------------


/u01/app/11.2.0/grid/OPatch/opatch lsinventory -local -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc


/u01/app/oracle/product/11.2.0/db_1/OPatch/opatch lsinventory -local -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc




NOTE: The applied patch is displayed in lsinventory only after the patch has been applied to the last node of the cluster


****************************************************************************


*** FOLLOW THE ABOVE FOR NODE2 AS WELL, BUT CHANGE THE HOSTNAME ACCORDINGLY ***


TO VERIFY THE LSINVENTORY -
-------------------------


/u01/app/11.2.0/grid/OPatch/opatch lsinventory -local -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc


/u01/app/oracle/product/11.2.0/db_1/OPatch/opatch lsinventory -local -invPtrLoc /u01/app/11.2.0/grid/oraInst.loc


*********************************************************************************

Useful ASM commands

ASM COMMANDS -
------------
We can create an ASM disk group using an XML file, as in the example below -


vi <file_name>.xml


<dg name="FRA" redundancy="external">
  <dsk string="ORCL:ASMDISK09"/>
  <dsk string="ORCL:ASMDISK10"/>
  <dsk string="ORCL:ASMDISK11"/>
  <a name="compatible.asm" value="11.2"/>
  <a name="compatible.rdbms" value="11.2"/>
  <a name="compatible.advm" value="11.2"/>
</dg>


asmcmd mkdg /home/oracle/labs/less_02/<file_name>.xml   ---> to create the disk group described by the XML file


$ asmcmd
ASMCMD> lsdg   ---> to display the disk groups
ASMCMD> lsdsk -G <dg_name>   ---> to list the disks of the specified disk group
ASMCMD> mount <dg_name>   ---> to mount a disk group



To list the attributes of an ASM disk group -
-----------------------------------------


SELECT G.NAME DISK_GROUP, A.NAME ATTRIBUTE, A.VALUE
 FROM V$ASM_ATTRIBUTE A, V$ASM_DISKGROUP G
 WHERE A.GROUP_NUMBER = G.GROUP_NUMBER
 AND G.NAME = 'DATA';


ASMCMD> lsattr -lm -G <diskgroup_name>


To find current space utilization of Disk Groups-
-------------------------------------------------


SELECT NAME, TYPE, TOTAL_MB, FREE_MB,
 REQUIRED_MIRROR_FREE_MB RMFM, USABLE_FILE_MB
 FROM V$ASM_DISKGROUP;


ASMCMD> lsdg




To retrieve disk information for a disk group -
---------------------------------------------


SELECT G.NAME DISK_GROUP, D.NAME, D.STATE, D.TOTAL_MB,
 D.FREE_MB
 FROM V$ASM_DISK D, V$ASM_DISKGROUP G
 WHERE D.GROUP_NUMBER = G.GROUP_NUMBER
 AND G.NAME = 'DATA';


ASMCMD> lsdsk -k


To drop disks from a disk group -
---------------------------


ALTER DISKGROUP <DiskGroup_Name> DROP DISK <DISK1>, <DISK2>;
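
For instance (the disk group and disk names below are hypothetical; the optional REBALANCE POWER clause controls how aggressively ASM rebalances afterwards):

ALTER DISKGROUP DATA DROP DISK DATA_0005 REBALANCE POWER 4;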




To monitor the resulting rebalance operation -
----------------------------------------------


SELECT * FROM V$ASM_OPERATION;


ASMCMD> lsop




To view the connected Clients -
-----------------------------


SELECT G.NAME DISK_GROUP, C.*
 FROM V$ASM_CLIENT C, V$ASM_DISKGROUP G
 WHERE C.GROUP_NUMBER = G.GROUP_NUMBER
 AND G.NAME = 'DATA';


ASMCMD> lsct

Useful CRS commands

CRS COMMANDS -
------------

pgrep -l d.bin   ---> To list the Clusterware daemon processes running on the current node

$GRID_HOME/bin/crsctl stat res -t    ---> To check the status of resources on all nodes

crsctl check crs ---> To check the Clusterware stack on the current node

crsctl stop crs ---> To stop the Clusterware stack on the current node

crsctl start crs ---> To start the Clusterware stack on the current node

crsctl check cluster ---> To check the viability of CSS across all nodes

crsctl query css votedisk ---> To determine the voting disk file locations

ocrcheck ---> To determine the location of OCR

crsctl stat res ora.<dg_name>.dg -t  ---> To verify that the specified disk group is online on all nodes

ocrconfig -add +<dg_name> ---> To add a second OCR location stored in an ASM disk group

cat /etc/oracle/ocr.loc ---> To see the changes made to the file referencing the new OCR location

ocrconfig -delete +<dg_name> ---> To remove the OCR file from the disk group

ocrconfig -showbackup ---> To list the OCR backups and the nodes on which they were performed

ocrconfig -manualbackup ---> To perform manual backup of OCR

ocrconfig -export /home/oracle/ocr.bkp  ---> To perform logical backup of OCR to the path /home/oracle/ocr.bkp

ocrconfig -showbackup manual ---> To list the manual backups of the OCR

ocrcheck -local ---> To determine the location and integrity of the OLR

ocrconfig -local -export /home/olr.bkp ---> To perform a logical backup of the OLR

Creating a RAC Database manually


Creating a RAC Database.


DB Name : layatest
Instance 1 : layatest1
Instance 2 : layatest2

ASM Diskgroups - DATA1 and DATA3




1) Create the required directories on both nodes


mkdir -p /u01/app/oracle/admin/layatest/adump
mkdir -p /u01/app/oracle/admin/layatest/bdump
mkdir -p /u01/app/oracle/admin/layatest/cdump
mkdir -p /u01/app/oracle/admin/layatest/udump
mkdir -p /u01/app/oracle/admin/layatest/pfile


2) Create a parameter file.



###########################################
log_archive_dest_1='LOCATION=+DATA2/'
log_archive_format=%t_%s_%r.dbf
db_block_size=8192
db_file_multiblock_read_count=16
open_cursors=300
db_name=layatest
background_dump_dest=/u01/app/oracle/admin/layatest/bdump
core_dump_dest=/u01/app/oracle/admin/layatest/cdump
user_dump_dest=/u01/app/oracle/admin/layatest/udump
db_create_file_dest=+DATA1
compatible=10.2.0.3.0
processes=150
sga_target=285212672
audit_file_dest=/u01/app/oracle/admin/layatest/adump
remote_login_passwordfile=exclusive
pga_aggregate_target=94371840
undo_management=AUTO
control_files=("+DATA1","+DATA3")
cluster_database_instances=2
#cluster_database=true
layatest1.instance_number=1
layatest2.instance_number=2
layatest2.thread=2
layatest1.undo_tablespace=UNDOTBS1
layatest1.thread=1


3) Create the password file on one node and copy it to all the other nodes.



rhelrac1-> cd $ORACLE_HOME/dbs
rhelrac1-> orapwd file=orapwlayatest1 password=admin entries=5
rhelrac1-> scp orapwlayatest1 oracle@rhelrac2:/u01/app/oracle/product/10.2.0/db_1/dbs/orapwlayatest2


4) Create the spfile from the pfile, then edit the init.ora on each instance to point to the spfile.




rhelrac1-> export ORACLE_SID=+ASM1

rhelrac1-> asmcmd

ASMCMD> lsdg

State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED EXTERN N N 512 4096 1048576 3858 3760 0 3760 0 DATA1/
MOUNTED NORMAL N N 512 4096 1048576 3765 3483 1255 1114 0 DATA2/
MOUNTED EXTERN N N 512 4096 1048576 3000 2903 0 2903 0 DATA3/
MOUNTED EXTERN N N 512 4096 1048576 3858 3760 0 3760 0 FLASH/
MOUNTED EXTERN N N 512 4096 1048576 1137 1044 0 1044 0 ORADATA/
MOUNTED NORMAL N N 512 4096 1048576 1928 1742 0 871 0 ORANORMAL/

ASMCMD> cd data2
ASMCMD> ls
ASMCMD> mkdir layatest
ASMCMD> cd layatest
ASMCMD> mkdir spfile
ASMCMD> cd spfile
ASMCMD> pwd
+data2/layatest/spfile

Create the spfile in the above location, +DATA2/layatest/spfile:

rhelrac1-> export ORACLE_SID=layatest1
rhelrac1-> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Sat Mar 17 14:47:25 2012

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> create spfile='+DATA2/layatest/spfile/spfilelayatest.ora' from pfile='/u01/app/oracle/admin/layatest/pfile/initlayatest.ora';

File created.

SQL> exit

Now edit the pfile in the Oracle home so that it points to the spfile. Repeat the same on all the nodes.

rhelrac1-> cd $ORACLE_HOME/dbs
rhelrac1-> vi initlayatest1.ora
spfile=+DATA2/layatest/spfile/spfilelayatest.ora

:wq!


rhelrac2-> cd $ORACLE_HOME/dbs
rhelrac2-> vi initlayatest2.ora
spfile=+DATA2/layatest/spfile/spfilelayatest.ora



5) Start the instance on node1 and create the database.


rhelrac1-> export ORACLE_SID=layatest1
rhelrac1-> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Sat Mar 17 14:52:30 2012

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1267068 bytes
Variable Size 92277380 bytes
Database Buffers 188743680 bytes
Redo Buffers 2924544 bytes

SQL> create database layatest
2 datafile '+DATA1' size 500m extent management local
3 sysaux datafile '+DATA1' size 500m
4 default temporary tablespace temp tempfile '+DATA1' size 100m
5 undo tablespace undotbs1 datafile '+DATA1' size 100m
6 logfile
7 group 1 ('+DATA1','+DATA2') size 10m,
8 group 2 ('+DATA1','+DATA2') size 10m
9 /


Database created.


SQL> @?/rdbms/admin/catalog.sql;

PL/SQL procedure successfully completed.

SQL> @?/rdbms/admin/catproc.sql;

PL/SQL procedure successfully completed.

-- Create undo tablespace for the 2nd node.

SQL> create undo tablespace undotbs2 datafile '+DATA1' size 100m;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='layatest2';

System altered.

-- Enable the cluster database.

SQL> alter system set cluster_database=true scope=spfile sid='*';

System altered.

--Create online redo log file for the 2nd node.

SQL> alter database add logfile thread 2 group 3 ('+DATA1','+DATA3') size 10m;

Database altered.

SQL> alter database add logfile thread 2 group 4 ('+DATA1','+DATA3') size 10m;

Database altered.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.



--Startup the database.

SQL> startup
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1267068 bytes
Variable Size 96471684 bytes
Database Buffers 184549376 bytes
Redo Buffers 2924544 bytes
Database mounted.
Database opened.


-- Enable the thread 2

SQL> alter database enable public thread 2;

Database altered.

--Create RAC specific Views.

SQL> @?/rdbms/admin/catclust.sql;

PL/SQL procedure successfully completed.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> exit


-- Register the database in the OCR.

rhelrac1-> srvctl add database -d layatest -o /u01/app/oracle/product/10.2.0/db_1 -p +DATA2/layatest/spfile/spfilelayatest.ora

rhelrac1-> srvctl add instance -d layatest -i layatest1 -n rhelrac1

rhelrac1-> srvctl add instance -d layatest -i layatest2 -n rhelrac2

rhelrac1-> srvctl config database -d layatest

rhelrac1 layatest1 /u01/app/oracle/product/10.2.0/db_1
rhelrac2 layatest2 /u01/app/oracle/product/10.2.0/db_1


rhelrac1-> srvctl start database -d layatest

rhelrac1-> srvctl status database -d layatest

Instance layatest1 is running on node rhelrac1
Instance layatest2 is running on node rhelrac2

--End of Database creation.


--Enable the archive log mode.


SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination +DATA2
Oldest online log sequence 42
Current log sequence 43

--Disable the cluster database.

SQL> alter system set cluster_database = false scope=spfile sid='layatest1';

System altered.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit

-- Stop the database on all nodes.

rhelrac1-> srvctl stop database -d layatest

-- Mount the database exclusively in node 1

rhelrac1-> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Sat Mar 17 15:32:47 2012

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1267068 bytes
Variable Size 96471684 bytes
Database Buffers 184549376 bytes
Redo Buffers 2924544 bytes
Database mounted.

--enable archive log

SQL> alter database archivelog;

Database altered.

--Enable the cluster database.

SQL> alter system set cluster_database = true scope=spfile sid='layatest1';

System altered.

SQL> shut immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
rhelrac1-> srvctl start database -d layatest
rhelrac1-> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Sat Mar 17 15:35:01 2012

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +DATA2
Oldest online log sequence 42
Next log sequence to archive 43
Current log sequence 43
SQL>exit

-------------------------------------------------------------------------------------

Setting local listeners:-
------------------------


alter system set local_listener = 'listener_layatest1' scope=spfile sid='layatest1';
alter system set local_listener = 'listener_layatest2' scope=spfile sid='layatest2';

srvctl stop database -d layatest


tnsnames.ora on both nodes
--------------------------

layatest1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = rhelrac1-vip) (PORT = 1532))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = layatest) (INSTANCE_NAME = layatest1))
)

listener_layatest1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = rhelrac1-vip) (PORT = 1532))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = layatest) (INSTANCE_NAME = layatest1))
)
layatest2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = rhelrac2-vip) (PORT = 1532))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = layatest) (INSTANCE_NAME = layatest2))
)
listener_layatest2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = rhelrac2-vip) (PORT = 1532))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = layatest) (INSTANCE_NAME = layatest2))
)
layatest =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = rhelrac1-vip) (PORT = 1532))
(ADDRESS = (PROTOCOL = TCP) (HOST = rhelrac2-vip) (PORT = 1532))
(LOAD_BALANCE = yes)
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = layatest))
)

Listener.ora in node1
---------------------

lsnrlayatest_rhelrac1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC) (KEY = extproc_layatest1))
(ADDRESS = (PROTOCOL = TCP)(HOST = rhelrac1-vip)(PORT = 1532) (IP = FIRST))
)
)
ADMIN_RESTRICTIONS_lsnrlayatest_rhelrac1=ON


--------------------------------------------
Start the listener:-

lsnrctl start lsnrlayatest_rhelrac1



Listener.ora in node2
---------------------

lsnrlayatest_rhelrac2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC) (KEY = extproc_layatest2))
(ADDRESS = (PROTOCOL = TCP) (HOST = rhelrac2-vip) (PORT = 1532) (IP = FIRST))
)
)
ADMIN_RESTRICTIONS_lsnrlayatest_rhelrac2=ON

--------------------------------------------

Start the Listener

lsnrctl start lsnrlayatest_rhelrac2
-----------------------------------


Start the database.
-------------------

srvctl start database -d layatest

Wednesday 1 August 2012

Creating Oracle 10g RAC database on 11g Grid Infrastructure

In the Oracle 11gR2 Grid Infrastructure software, ASM and the clusterware have been combined into one Oracle home, and ASM is part of the clusterware. Clusterware files such as the OCR and voting disks are created inside ASM disk groups. This is unlike Oracle 10g, where the voting disks and OCR had to be configured before the ASM installation and were placed either on OCFS (Linux) or on a shared raw filesystem (Solaris).

Please check pre-11g database version compatibility with 11g Grid Infrastructure before installing; for more information, see MOS ID 337737.1.

The certification matrix is available in MOS ID 337737.1.
I will be installing a 10gR2 (10.2.0.4) database on 11gR2 Grid Infrastructure. I am assuming that you have already installed 11gR2 Grid Infrastructure and that ASM is up and running.

$srvctl status asm
ASM is running on node1, node2

Go to the Oracle Database software location and install the database software (10.2.0.1).

$ /usr/orasw/10g_software/database/runInstaller


Run root.sh on node1 and node2 as root.

Before creating the 10g database, I decided to install the 10.2.0.4 patch set (patch set 3) on top of 10.2.0.1.

Log in as the Oracle software owner and install patch set 3:

$ /usr/orasw/10g_software/10204_patch/Disk1/runInstaller



The Oracle 10g software with patch set 3 has been installed with no issues. It’s time to create the RAC database.

Start DBCA from 10g $ORACLE_HOME/bin

$ cd $ORACLE_HOME/bin
$ dbca









Where is my Configuration Assistant page for Real application clusters?


Hmm... this is not looking right, as DBCA is not showing the cluster database selection page. Anyway, just to see what happens next, I decided to move forward and selected ASM storage management for the database files.








It failed at 2% with “ORA-29702”, so I aborted the installation.



Checked the cluster node status.

$ olsnodes -t
node1        Unpinned
node2        Unpinned

If the cluster is a newly created 11.2 Grid Infrastructure (CRS) installation and you want to create a pre-11.2 RAC database, you need to pin the corresponding nodes. In our case their status is showing as Unpinned. If Grid Infrastructure was upgraded to 11.2, there is no need to pin. Read more on installing a pre-11.2 database in an 11gR2 Grid Infrastructure environment in My Oracle Support Document ID 948456.1.

 $ crsctl pin css -n node1 node2
/usr/lib/hpux64/dld.so: Unable to find library 'libhasgen10.so'.

Make sure to log in as root before making any cluster-related changes.
$ su -

# crsctl pin css -n node1 node2

CRS-4664: Node node1 successfully pinned.
CRS-4664: Node node2 successfully pinned.

#exit

$ olsnodes -t
node1        Pinned
node2        Pinned

Our nodes are now showing Pinned status.

Now start DBCA again from $ORACLE_HOME/bin:
$ cd $ORACLE_HOME/bin
$ dbca




Now I am able to see the RAC database screen. So far so good.







It failed again with an error saying that DBCA is unable to communicate with ASM, even though ASM is up and running:

“ORA-15260: permission denied on ASM disk group”

I decided to abort the installation and looked into MOS for possible solutions; this is what I came across.

Due to unpublished bug 8288940, DBCA will fail if database files are located in ASM. Oracle's recommendation is to apply patch 8288940.

I downloaded and applied patch 8288940 on top of 10.2.0.4. Please follow the instructions in the README.txt file for the patch installation steps.
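
For reference, applying a one-off patch like this with OPatch typically looks roughly like the following; the unzip location is hypothetical and the patch README is the authoritative source for the exact steps:

$ cd /usr/orasw/10g_software/8288940         ---> hypothetical location where the patch was unzipped
$ $ORACLE_HOME/OPatch/opatch apply           ---> apply the patch to the 10.2.0.4 database home on each node
$ $ORACLE_HOME/OPatch/opatch lsinventory     ---> verify that patch 8288940 is now listed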

Started DBCA again from the 10gR2 ORACLE_HOME:

$ cd $ORACLE_HOME/bin
$ dbca 













Now the installer is prompting me to provide the ASM password.






I can see the DATA disk group that I created during the 11gR2 Grid Infrastructure installation.
Finally, after clicking Next through the next couple of screens, I am able to create the 10g database on 11gR2 Grid Infrastructure.

Please look into my other blog posts for the Oracle 11gR2 Grid Infrastructure installation steps, and read more on installing a pre-11.2 database in an 11gR2 Grid Infrastructure environment in My Oracle Support Document ID 948456.1.