Install the Openfiler OS; it works as a NAS/SAN storage appliance
Install Oracle Enterprise Linux on both RAC nodes
Edit the hosts file on both nodes, add the entries below, and remove the hostname from the first (loopback) line
vi /etc/hosts
172.22.0.175 rac1
172.22.0.176 rac2
172.22.0.178 rac1-vip
172.22.0.179 rac2-vip
172.22.0.177 openfiler
10.0.0.101 rac1-priv
10.0.0.102 rac2-priv
:wq (save and exit)
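As a quick sanity check of the name resolution (the VIP names will not respond until the VIPs are configured with vipca later), ping the other names from both nodes:
#ping -c 2 rac1
#ping -c 2 rac2
#ping -c 2 rac1-priv
#ping -c 2 rac2-priv
#ping -c 2 openfiler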
Add the Oracle groups and user on both nodes
#groupadd dba
#groupadd oinstall
#useradd -c "Oracle software owner" -G dba -g oinstall oracle
#passwd oracle
The group IDs and user ID must be the same on both RAC nodes.
Issue the commands below to check:
#cat /etc/group (for groupid)
#cat /etc/passwd (for userid)
If they do not match, recreate them with explicit IDs:
#groupadd -g 1001 oinstall
#groupadd -g 1002 dba
#useradd -u 1001 -g 1001 -G 1002 -d /home/oracle -m oracle
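As a quick check (assuming the IDs created above), id should report the same numbers on both nodes, something like uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba):
#id oracle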
Set up the /etc/sysctl.conf file on both nodes to configure the kernel parameters (append the lines below):
kernel.shmmax = 2147483648
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
#sysctl -p (apply the new settings)
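As a quick spot check (any of the parameters above can be queried the same way), confirm the values took effect on both nodes:
#sysctl kernel.shmmax
kernel.shmmax = 2147483648
#sysctl net.core.rmem_max
net.core.rmem_max = 262144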
Download and install the compat packages matching your system architecture (32-bit or 64-bit) on both nodes
compat-gcc-7.3-2.96.128.i386.rpm
compat-gcc-c++-7.3-2.96.128.i386.rpm
compat-libcwait-2.1-1.i386.rpm
compat-libstdc++-7.3-2.96.128.i386.rpm
compat-libstdc++-devel-7.3-2.96.128.i386.rpm
compat-oracle-rhel4-1.0-5.i386.rpm
Issue the command below to install the compat packages on both nodes
#rpm -ivh compat* --force --aid
Install the package below on both nodes; it is needed later for the pre- and post-installation checks of Oracle RAC using cluvfy. The package is shipped with the Oracle Clusterware software.
#rpm -ivh cvuqdisk-1.0.6-1.rpm --force --aid
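As a quick check, confirm the packages are registered in the RPM database on both nodes:
#rpm -qa | grep -i compat
#rpm -q cvuqdisk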
Create user-equivalence on both nodes
#su - oracle
$ssh-keygen -t rsa
(press Enter at each prompt to accept the default location and an empty passphrase)
$cd .ssh/
$cat id_rsa.pub >>authorized_keys
$scp authorized_keys oracle@rac2:/home/oracle/.ssh/ (copy to the same location on node 2)
Repeat the same steps on node 2 so that user equivalence works in both directions
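As a quick check of the user equivalence (run from both nodes as oracle; the first connection will only ask to accept the host key), each command should print the date without prompting for a password:
$ssh rac1 date
$ssh rac2 date
$ssh rac1-priv date
$ssh rac2-priv date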
Check the time on both nodes; it should be the same. If not, set it with the command below to synchronise the clocks.
#date 102812542009.35 (the format is MMDDhhmmYYYY.ss, i.e. month, day, hour, minute, year.second)
After installing Openfiler you get a URL like https://openfiler:446/ or https://172.22.0.177:446/
Open this URL in a browser to manage the storage graphically
First step: click on System and configure Network Access Configuration:
hostname   IP address      subnet mask    type
rac1       172.22.0.175    255.255.0.0    share
rac2       172.22.0.176    255.255.0.0    share
and click on Update
Second step: click on Services and modify them:
service name   status     modification
iSCSI target   disabled   Enable
Click on Enable in the Modification column to change the status to enabled.
Enable all services this way except the UPS server.
Third step: click on Block Devices
click on Edit Disk for /dev/hdc
you will get a new screen to create the primary (extended) and the logical partition
select mode Primary/Extended
and click on Create
again you will get the screen
select mode Logical (physical volume)
and click on Create
Fourth step: click on Volume Groups
give a volume group name
click on Add Volume Group
click on Create
Fifth step: click on Add Volume
give a volume name
give a volume description
give the required space (MB)
select the file system type iSCSI
click on Create
Sixth step: click on iSCSI Targets
click on Target IQN and click on Add
click on LUN Mapping and click on Map
click on Network ACL, select Allow in the Access column, and finally click on Update to complete the configuration
Add the parameter below to the iscsi.conf file on both nodes
#vi /etc/iscsi.conf
add this line
DiscoveryAddress=openfiler
After configuring Openfiler and adding the line to iscsi.conf, the shared disk becomes visible on both nodes. Issue the command below as the root user to check:
#fdisk -l
(the output lists the shared disk with the message that it does not contain a valid partition table)
Create partitions on the shared disk
#service iscsi restart
#fdisk -l
#fdisk /dev/sdb
p (type p to print the current partition table)
n (to create new partition)
p (to create primary partition)
1
enter
enter
+2048m (give the size and press enter; this partition is for the OCR)
n
p
2
enter
enter
+2048m (give the size and press enter; this partition is for the voting disk)
n
p
3
enter
enter
+14000M (give the size and press enter; this partition is for data)
n
e (this is extended partition)
enter
enter
n
enter
+25000m (assign all the remaining space; this is also for data)
w (write the partition table and exit)
#partprobe /dev/sdb (update the kernel with the new partition table)
#fdisk -l (the new partitions are now listed)
#mkfs.ext3 /dev/sdb1 (format the OCR and voting disk partitions as ext3)
#mkfs.ext3 /dev/sdb2
Add the entries below and reboot the system once to create the raw devices at the following locations on both nodes
#vi /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
/dev/raw/raw3 /dev/sdb3
/dev/raw/raw4 /dev/sdb4
:wq
#service iscsi restart
#service rawdevices restart
#chown root.oinstall /dev/raw/raw1 (set the ownership of all the raw devices; raw1 is owned by root because it holds the OCR)
#chown oracle.oinstall /dev/raw/raw2
#chown oracle.oinstall /dev/raw/raw3
#chown oracle.oinstall /dev/raw/raw4
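The chown commands above do not survive a reboot; one way to keep the ownership (a sketch, assuming the default /etc/rc.local is used at boot) is to append the same commands on both nodes:
#vi /etc/rc.local
chown root.oinstall /dev/raw/raw1
chown oracle.oinstall /dev/raw/raw2
chown oracle.oinstall /dev/raw/raw3
chown oracle.oinstall /dev/raw/raw4
:wq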
Log in as the oracle user and run the pre- and post-installation checks using cluvfy
$/home/oracle/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose (for pre installation)
$/home/oracle/clusterware/cluvfy/runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose (for post installation)
The output should show that the pre-installation check completed successfully, so we can continue with the Clusterware installation; if it is unsuccessful, check the errors.
If you get an error like 'user equivalence check failed', create the SSH keys again and copy them to both nodes. If you get an error like 'could not find a suitable set of interfaces for the VIPs', we
can ignore this error and create the VIPs manually using vipca. This error occurs because:
CVU checks the following criteria before considering a set of interfaces:
- the interfaces should have the same name across nodes
- they should belong to the same subnet and have the same netmask
- they should be on a public (and routable) network
Oftentimes the interfaces planned for the VIPs are configured on 10.*, 172.16.* to 172.31.*, or 192.168.* networks, which are not routable. Hence CVU does not
consider them suitable for the VIPs. If none of the available interfaces satisfy these criteria, CVU complains "error - could not find a suitable set of interfaces
for the VIPs". It is worth noting that such addresses will actually work if they are in fact reachable, but CVU just thinks they are private and reports accordingly. To
invoke vipca, go to $ORA_CRS_HOME/bin and run the following command as the root user after the root scripts have been run on both nodes during the Clusterware installation.
#./vipca
1)click on next button
2)select eth0 and click on next button.
3)Type in the IP alias name for the vip and IP address and click on next
4)lastly click on finish
This will configure and start the VIPs. Now you can go back to the previous screen (the last screen of the Clusterware installer) and click OK.
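As a quick check that the VIPs are up (a sketch, assuming $ORA_CRS_HOME points to the Clusterware home), srvctl lists the VIP, GSD, listener and ONS status for each node:
#$ORA_CRS_HOME/bin/srvctl status nodeapps -n rac1
#$ORA_CRS_HOME/bin/srvctl status nodeapps -n rac2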
http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_18.shtml
Installation of the Clusterware software:
$cd /home/oracle/cluster/
$./runInstaller
you will get a window Specify Cluster Configuration
click on Add
public node name: rac2
private node name: rac2-priv
virtual host name: rac2-vip
and click on OK to continue
you will get the next window, Specify Oracle Cluster Registry (OCR) Location
select External Redundancy and on the next line give the location
specify the OCR location /dev/raw/raw1 and click on Next to continue
you will get the next window, Specify Voting Disk Location
select External Redundancy and on the next line give the location
give the voting disk location /dev/raw/raw2 and click on Next to continue
on the next window click on Install to start the installation
at the end you will get a window asking to run orainstRoot.sh and root.sh on both nodes as the root user
after running these scripts, if you got the prerequisite error 'could not find a suitable set of interfaces for the VIPs', configure the VIPs using vipca; it resides
in the Clusterware bin folder, and the steps are explained above in this note
finally you will get the window 'installation completed successfully'
OCR :- keeps the cluster configuration, including information about the RAC nodes
Voting Disk :- cluster processes vote through it that 'we are alive'; when an existing node fails, the surviving nodes use it to resolve membership and bring up that node's services elsewhere
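To confirm both are in place (a sketch, assuming $ORA_CRS_HOME points to the Clusterware home), run as root:
#$ORA_CRS_HOME/bin/ocrcheck (shows the OCR device, version and used/free space)
#$ORA_CRS_HOME/bin/crsctl query css votedisk (lists the configured voting disks)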
Installation of ASM:
Install ASM using the normal Oracle 10g database software
$/home/oracle/oracle10gsoftware/runInstaller
you will get the first window; select Enterprise Edition and click on Next
on the second window specify the location of the ASM home to install, e.g. /home/oracle/product/10.2/asm
you will get the next window, Specify Hardware Cluster Installation Mode
select Cluster Installation, tick rac2, and finally click on Next to continue
on the next window select Configure Automatic Storage Management (ASM)
specify the ASM SYS password (sys)
confirm the password (sys) and finally click on Next to continue
on the next window select External redundancy
on the same window click on Show Candidates, select the first and second candidates, and click on Next to continue
on the next window you are asked to run root.sh on both nodes as the root user
finally you will get 'installation completed successfully'; click on Exit to complete the installation
Creating ASM database
$export ORACLE_HOME=/home/oracle/product/10.2/asm
$export PATH=$ORACLE_HOME/bin:$PATH
$export LD_LIBRARY_PATH=$ORACLE_HOME/lib
$dbca (if it does not start, go to the bin directory of the ASM home and run it from there)
you will get the 1st window; select Oracle Real Application Clusters database and click on Next
on the next window select Configure Automatic Storage Management
the window shows rac1 and rac2; click on Select All to continue
on the next window give the password (sys) and click on Next to continue
on the next window you get the default disk group name DATA
in the same window select External redundancy
in the same window click on Show Candidates; it lists the third and fourth raw devices, /dev/raw/raw3 and /dev/raw/raw4
by default I think they are already selected; if not, select both and click on OK to continue
on the next window click on Finish to complete the ASM instance creation.
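As a quick check (a sketch, assuming the ASM home above and that DBCA named the instance +ASM1 on rac1), the DATA disk group should be mounted:
$export ORACLE_HOME=/home/oracle/product/10.2/asm
$export ORACLE_SID=+ASM1
$$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;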
Installation of Normal Oracle 10g software
$/home/oracle/oracle10gsoftware/runInstaller
you will get the window Specify Hardware Cluster Installation Mode; select Cluster Installation
in the same window rac1 and rac2 are listed and selected by default; if not, select both and click on Select All to continue
on the next window click on Next to continue
on the next window click on Next to continue
on the next window, Select Configuration Option, select Install database Software only and click on Next to continue
on the next window click on Install to complete the installation
Creation of database
$export ORACLE_HOME=/home/oracle/product/10.2/db_1
$export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin
$dbca
you will get the first window; select Oracle Real Application Clusters database and click on Next to continue
on the next window select Create a Database and click on Next to continue
on the next window rac1 and rac2 are listed by default; click on Select All to continue
on the next window select General Purpose and click on Next to continue
on the next window give the database name and click on Next to continue
on the next window click on Next to continue
on the next window give the password and click on Next to continue
on the next window select Automatic Storage Management and click on Next to continue
on the next window specify the ASM SYS password and click on Next to continue
on the next window select the DATA disk group and click on Next to continue
on the next window select Use Common Location +DATA and click on Next to continue
on the final window the database creation completes successfully
you will get the prod1 instance on rac1 and the prod2 instance on rac2
the database name is prod on both nodes
put the instance entries in /etc/oratab on both nodes
for example: prod1:/home/oracle/product/10.2/db_1:N
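As a quick check (a sketch, assuming the ORACLE_HOME above and database name prod), srvctl should report both instances running:
$export ORACLE_HOME=/home/oracle/product/10.2/db_1
$$ORACLE_HOME/bin/srvctl status database -d prod
Instance prod1 is running on node rac1
Instance prod2 is running on node rac2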
to check status of resources
#/home/oracle/product/10.2/crs/bin/crs_stat -t
to stop all resources
#/home/oracle/product/10.2/crs/bin/crs_stop -all
to check daemons
#/home/oracle/product/10.2/crs/bin/crsctl check crs
to stop daemons
#/home/oracle/product/10.2/crs/bin/crsctl stop crs
to stop services
#service iscsi stop
#service rawdevices stop
to start services
#service iscsi start
#service rawdevices start
#chown root.oinstall /dev/raw/raw1
#chown oracle.oinstall /dev/raw/raw2
#chown oracle.oinstall /dev/raw/raw3
#chown oracle.oinstall /dev/raw/raw4
to change the stop commands above into start commands, use a vi substitution:
:%s/stop/start/g or
:1,$s/stop/start/
To uninstall RAC
1)stop crs on both nodes
2)#/home/oracle/product/10.2/crs/install/rootdelete.sh local nosharedvar nosharedhome on both nodes
3)rm -rf /etc/ora* on both nodes
4)rm -rf /var/tmp/.oracle on both nodes
5)rm -rf /crs /db_1 /asm
6)rm -rf /etc/inittab.no*
7)rm -rf /home/oracle/oraInventory
8)check for leftover processes and kill them:
#ps -ef | grep pmon
#ps -ef | grep crs
#ps -ef | grep init.d
#ps -ef | grep oracle
#kill -9 <pid> (for any remaining Oracle/CRS process)
9)dd if=/dev/zero of=/dev/raw/raw1 bs=8192 count=5000 (overwrite the OCR raw device)
10)remove, if present, the .crs, .cssd and .ocmd files under /etc/rc.d
To check the I/O performance of the hard disk:
$dd if=/dev/zero of=/pyrosan/eastdata/santest bs=1024k count=10000 (this writes a roughly 10 GB file)
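To get a rough read figure as well (reusing the same test file written above; the result can be inflated by the OS cache), time a read back of the file:
$time dd if=/pyrosan/eastdata/santest of=/dev/null bs=1024k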