Installing RAC on CentOS 4.5


Be sure to use VMware Server for RAC experiments; RAC kept failing under GSX Server.
GSX time synchronization is very hard to get right; after switching to VMware Server it worked on the first try.
Remember this lesson: a whole week was wasted struggling with GSX!
vi /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
/sbin/sysctl -p
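
To confirm the new values actually took effect, a quick spot check (just a convenience; values should match sysctl.conf above):

# re-read a few of the parameters applied by sysctl -p
/sbin/sysctl kernel.shmmax kernel.sem fs.file-max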

vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

vi /etc/pam.d/login
session required /lib/security/pam_limits.so
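
A quick sanity check after the pam_limits change (a sketch; log in as oracle on the console so /etc/pam.d/login is exercised):

# soft/hard limits should match limits.conf
ulimit -Su    # expect 2047
ulimit -Hu    # expect 16384
ulimit -Sn    # expect 1024
ulimit -Hn    # expect 65536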

vi /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
vi /etc/rc.d/rc.local
Add the following line at the end of the file:
/sbin/modprobe hangcheck_timer
Load the module now without rebooting:
# modprobe hangcheck_timer
Check whether hangcheck started successfully:
# grep hangcheck /var/log/messages | tail -2
If a message like the following appears, hangcheck is running (the tick and margin should match the options set in modprobe.conf above):
Mar 16 12:52:32 node2 kernel: Hangcheck: starting hangcheck timer 0.5.0 (tick is 30 seconds, margin is 180 seconds).
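
To double-check that the module is resident (for example after a reboot), a quick sketch:

# the hangcheck-timer module should be loaded
lsmod | grep hangcheck
dmesg | grep -i hangcheck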


Install the required RPMs
eject
mount /dev/cdrom /mnt
cd /mnt/CentOS/RPMS/
rpm -Uvh setarch-1*
rpm -Uvh compat-libstdc++-33-3*
rpm -Uvh make-3*
rpm -Uvh glibc-2*
rpm -Uvh openmotif-2*
rpm -Uvh compat-db-4*
rpm -Uvh gcc-3*
rpm -Uvh libaio-0*
rpm -Uvh rsh-*
rpm -Uvh compat-gcc-32-3*
rpm -Uvh compat-gcc-32-c++-3*
rpm -Uvh openmotif21*
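
A loop like the following (purely a convenience sketch) confirms that none of the packages above were skipped; anything reported missing needs another pass with the CD mounted:

for p in setarch compat-libstdc++-33 make glibc openmotif compat-db \
         gcc libaio rsh compat-gcc-32 compat-gcc-32-c++ openmotif21; do
    rpm -q $p >/dev/null || echo "MISSING: $p"
done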

cd /
eject

Users and groups
groupadd -g 700 dba
useradd -u 500 -g dba oracle

cd /
mkdir oracle
chown -R oracle:dba /oracle

su oracle
cd $HOME
vi .bash_profile
export ORACLE_BASE=/oracle
export ORACLE_TERM=xterm
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export ORA_CRS_HOME=/oracle/product/crs
export ORACLE_HOME=/oracle/product/database
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORA_CRS_HOME/bin:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
export ORACLE_SID=RAC1
ulimit -u 16384 -n 65536
umask 022

source .bash_profile
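
A quick check that the environment is in place (expected values as set above):

# run as oracle; every ORACLE_*/ORA_* variable should resolve
env | grep -E 'ORACLE|ORA_'
echo $ORACLE_SID    # RAC1 on node1, RAC2 on node2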

Clone node1 to node2
Change the hostname:
vi /etc/sysconfig/network
vi /etc/hosts
127.0.0.1 localhost

10.8.30.38 node1-priv
10.8.30.39 node2-priv

192.168.0.100 node1
192.168.0.101 node2

192.168.0.200 node1-vip
192.168.0.201 node2-vip

Disable sendmail, otherwise boot is painfully slow:
chkconfig sendmail off
chkconfig --list sendmail

ping -c2 node1
ping -c2 node2
ping -c2 node1-priv
ping -c2 node2-priv

Change the SID on node2
vi .bash_profile    (change ORACLE_SID to RAC2)

Set up SSH user equivalence for the oracle user
node2
su oracle
cd $HOME
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
cat *.pub >authorized_keys
Copy from node2 to node1:
scp authorized_keys node1:/home/oracle/.ssh/authorized_keys
node1
su oracle
cd $HOME
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
cat *.pub >>authorized_keys
scp authorized_keys node2:/home/oracle/.ssh/authorized_keys
Verify: run the following commands on both node1 and node2:
ssh node1 date
ssh node2 date

ssh node1-priv date
ssh node2-priv date
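
The first connection to each name asks to confirm the host key; running a loop like this once from each node (a convenience sketch) both verifies the equivalence and seeds known_hosts, so the installer is never blocked by a yes/no prompt:

# run as oracle on node1, then again on node2
for h in node1 node2 node1-priv node2-priv; do
    ssh $h date
done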


Time synchronization
Under VMware the two nodes drift apart by roughly twenty seconds, and the RAC installer raises a warning (something about NULL..., the exact text escapes me). It is harmless; just continue.
node1
IP address 192.168.0.100
NTP server:
vi /etc/ntp.conf
[root@node1 ~]# cat /etc/ntp.conf
server 127.127.1.0
fudge 127.127.1.0 stratum 11
driftfile /var/lib/ntp/drift
broadcastdelay 0.008

chkconfig --list ntpd
chkconfig ntpd on
service ntpd restart
netstat -tlunp
A UDP listener on port 123 indicates the NTP server is up.


ntpstat
synchronised to local net at stratum 12
time correct to within 949 ms
polling server every 64 s

ntptrace -n node1
node1: stratum 12, offset 0.000000, synch distance 0.449304

ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        LOCAL(0)        11 l   18   64   37    0.000    0.000   0.008


node2
[root@node2 ~]# cat /etc/ntp.conf
server node1
driftfile /var/lib/ntp/drift
broadcastdelay 0.001

[root@node2 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 node1           LOCAL(0)        12 u    9   64    1    1.695  8406.15   0.008
ntpq -p lists the state of the local NTP daemon relative to its upstream servers. The columns mean:

* remote: the IP or hostname of the upstream NTP server. Note the leftmost symbol: '*' marks the peer the daemon is currently synchronized to; '+' marks a reachable candidate peer.
* refid: the time source that the upstream server itself references.
* st: the stratum of the upstream server.
* when: seconds since the server was last polled.
* poll: the current polling interval, in seconds.
* reach: an octal register recording the success of the last eight polls (377 means all eight succeeded).
* delay: round-trip network delay to the server, in milliseconds.
* offset: the measured difference between the local clock and the server, in milliseconds.
* jitter: the variability of the offset measurements, in milliseconds.
[root@node2 ~]# vi /etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid"
SYNC_HWCLOCK=yes
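
If node2 has drifted too far for ntpd to slew gradually, a one-time step adjustment (a sketch using ntpdate, assuming node1 is reachable) brings it into range and saves the result to the hardware clock:

service ntpd stop
ntpdate node1        # step the clock directly against node1
service ntpd start
hwclock --systohc    # persist the corrected time to the BIOS clock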

Configure raw devices
Use fdisk to create one partition on each shared disk, /dev/sdb through /dev/sdg:

vi /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
/dev/raw/raw3 /dev/sdd1
/dev/raw/raw4 /dev/sde1
/dev/raw/raw5 /dev/sdf1
/dev/raw/raw6 /dev/sdg1

service rawdevices restart

chown oracle:dba /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
chown oracle:dba /dev/raw/raw3
chown oracle:dba /dev/raw/raw4
chown oracle:dba /dev/raw/raw5
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
chmod 660 /dev/raw/raw3
chmod 660 /dev/raw/raw4
chmod 660 /dev/raw/raw5
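
To confirm the bindings and permissions took (expected output follows the mappings above):

raw -qa              # lists each /dev/raw/rawN binding
ls -l /dev/raw/      # raw1..raw5 should be oracle:dba, mode 660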



Note
RHEL4 uses udev to manage device nodes, so changing the permissions on /dev/raw/raw1 by hand does not survive a reboot.
To make the permissions persistent, edit line 113 of /etc/udev/permissions.d/50-udev.permissions and change
raw/*:root:disk:0660
to
raw/*:oracle:dba:0660
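
The edit can be exercised without waiting for a reboot (a sketch; udevstart is the RHEL4-era tool that re-creates device nodes with the new permissions):

udevstart
service rawdevices restart
ls -l /dev/raw/      # should now show oracle:dba, and persist across reboots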

ASM
/etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]

/etc/init.d/oracleasm createdisk VOL1 /dev/sdd1
/etc/init.d/oracleasm createdisk VOL2 /dev/sde1
/etc/init.d/oracleasm createdisk VOL3 /dev/sdf1
/etc/init.d/oracleasm createdisk VOL4 /dev/sdg1

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
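
The disks are created once (above, on node1); node2 only needs to rescan. A quick check that both nodes see the same set:

# on node2
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks    # should print VOL1 .. VOL4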

Run the root scripts in the following order:
On node1: /oracle/oraInventory/orainstRoot.sh
On node2: /oracle/oraInventory/orainstRoot.sh
On node1: /oracle/product/crs/root.sh
On node2: /oracle/product/crs/root.sh

In a graphical session, run /oracle/product/crs/bin/vipca as root and manually reconfigure node1-vip and node2-vip.
As root, check that the RAC stack is healthy:
/oracle/product/crs/cfgtoollogs/configToolFailedCommands.sh
cd /oracle/product/crs/bin
[root@node1 bin]# ./crs_stat -t
Name           Type           Target     State      Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE     ONLINE     node1
ora.node1.ons  application    ONLINE     ONLINE     node1
ora.node1.vip  application    ONLINE     ONLINE     node1
ora.node2.gsd  application    ONLINE     ONLINE     node2
ora.node2.ons  application    ONLINE     ONLINE     node2
ora.node2.vip  application    ONLINE     ONLINE     node2
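
Beyond crs_stat, the nodeapps can also be probed per node with the standard srvctl syntax (a sketch; run with ORA_CRS_HOME/bin on the PATH):

srvctl status nodeapps -n node1
srvctl status nodeapps -n node2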

Removing RAC (copied from an online tutorial, untested)
1. Run rootdelete.sh and rootdeinstall.sh, found under $ORA_CRS_HOME/install.

2. Stop the nodeapps on all nodes:
srvctl stop nodeapps -n <node_name>

3. As root, remove the following files:
rm /etc/oracle/*
rm -f /etc/init.d/init.cssd

rm -f /etc/init.d/init.crs
rm -f /etc/init.d/init.crsd
rm -f /etc/init.d/init.evmd
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
rm -f /etc/inittab.crs
cp /etc/inittab.orig /etc/inittab
4. If any CRS processes are still running, find and kill them:

ps -ef | grep crs
kill <pid>
ps -ef | grep evm
kill <pid>
ps -ef | grep css
kill <pid>

5. If no other database processes are running, remove the socket files:

rm -f /var/tmp/.oracle/*

or

rm -f /tmp/.oracle/*

6. Delete the ocr.loc file, located in /etc/oracle.

7. Empty the CRS installation directory.

8. Remove the CRS installation from the Oracle Universal Installer inventory.

9. Zero out the OCR and voting raw devices with dd:

dd if=/dev/zero of=/dev/rdsk/V1064_vote_01_20m.dbf bs=1M count=256
dd if=/dev/zero of=/dev/rdsk/ocrV1064_100m.ora bs=1M count=256
10. Remove the /tmp/CVU* files.

11. Reboot the servers.

For the raw devices used in this setup, step 9 becomes:

dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=256
