The following installation was built and tested in VMware Workstation 17 Pro.
| | Node 1 | Node 2 |
|---|---|---|
| hostname | rac11g_1 | rac11g_2 |
| CPU | 2*2 | 2*2 |
| Memory | 8G | 8G |
| SWAP | 8G | 8G |
| Local disk | 100G | 100G |
| Disk group | Size | Redundancy | Purpose |
|---|---|---|---|
| OCR | 2G*3 | Normal | OCR and voting disks |
| DATA | 40G | External | control files, data files, redo logs, etc. |
| FRA | 15G | External | archived logs, flashback files, etc. |
| | Node 1 | Node 2 |
|---|---|---|
| Public IP | 192.168.23.101 | 192.168.23.102 |
| VIP | 192.168.23.103 | 192.168.23.104 |
| Private IP | 12.12.12.12 | 12.12.12.13 |
| SCAN IP | 192.168.23.105 | 192.168.23.105 |
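A quick sanity check on the plan above: the Public IPs, VIPs, and the SCAN IP must all sit on one subnet, while the private interconnect uses a separate one. A minimal sketch (same_24 is a hypothetical helper that only compares /24 prefixes; the addresses are copied from the tables):

```shell
#!/bin/sh
# same_24 succeeds when every IPv4 address given shares one /24 prefix.
same_24() {
  prefix=""
  for ip in "$@"; do
    p=${ip%.*}                      # drop the last octet -> the /24 prefix
    [ -z "$prefix" ] && prefix=$p
    [ "$p" = "$prefix" ] || return 1
  done
  return 0
}

# Public side: Public IPs + VIPs + SCAN must share 192.168.23.0/24
same_24 192.168.23.101 192.168.23.102 192.168.23.103 192.168.23.104 192.168.23.105 \
  && echo "public side OK"
# Interconnect: a different subnet entirely
same_24 12.12.12.12 12.12.12.13 && echo "private side OK"
```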
Pre-installation preparation. Perform the following steps on both nodes.
- [root@localhost ~]# cd /etc/sysconfig/network-scripts
- [root@localhost network-scripts]# vi ifcfg-ens33
- # edit the following entries
- BOOTPROTO=static
- ONBOOT=yes
- IPADDR=192.168.23.101
-
- # restart the interface
- [root@localhost ~]# ifdown ens33 && ifup ens33
Because the Host-Only NIC was added after the VM was created, it has no configuration file; one can be created as follows.
- [root@rac11g_1 ~]# nmcli con show
- NAME UUID TYPE DEVICE
- ens33 405f5126-93bf-472d-970a-f4e2a707c544 ethernet ens33
- Wired connection 1 b98743f7-f514-3b5f-82c8-ef2ac042ef57 ethernet ens36
- virbr0 83e51f05-6627-4910-8b85-b5c2e1b3f683 bridge virbr0
- [root@rac11g_1 ~]# ip addr
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
- link/ether 00:0c:29:69:93:11 brd ff:ff:ff:ff:ff:ff
- inet 192.168.23.101/24 brd 192.168.23.255 scope global secondary noprefixroute ens33
- valid_lft forever preferred_lft forever
- inet6 fe80::98a6:6bd5:2267:8f06/64 scope link noprefixroute
- valid_lft forever preferred_lft forever
- 3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
- link/ether 00:0c:29:69:93:1b brd ff:ff:ff:ff:ff:ff
- inet 192.168.80.128/24 brd 192.168.80.255 scope global noprefixroute dynamic ens36
- valid_lft 1543sec preferred_lft 1543sec
- inet6 fe80::61e9:798:194b:1dc3/64 scope link noprefixroute
- valid_lft forever preferred_lft forever
- 4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
- link/ether 52:54:00:ce:35:59 brd ff:ff:ff:ff:ff:ff
- inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
- valid_lft forever preferred_lft forever
- 5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
- link/ether 52:54:00:ce:35:59 brd ff:ff:ff:ff:ff:ff
- [root@rac11g_1 ~]# cd /etc/sysconfig/network-scripts/
- [root@rac11g_1 network-scripts]# cp ifcfg-ens33 ifcfg-ens36
- [root@rac11g_1 network-scripts]# vi ifcfg-ens36
- # after editing
- NAME=ens36
- UUID=b98743f7-f514-3b5f-82c8-ef2ac042ef57
- DEVICE=ens36
- IPADDR=12.12.12.12
-
- # restart the ens36 interface
- [root@rac11g_1 network-scripts]# ifdown ens36 && ifup ens36
- [root@localhost ~]# hostnamectl set-hostname rac11g_1
- [root@localhost ~]# hostname
- rac11g_1
- [root@rac11g_1 ~]# vi /etc/hosts
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
-
- #Public IP
- 192.168.23.101 rac11g_1
- 192.168.23.102 rac11g_2
-
- #VIP
- 192.168.23.103 rac11g_1-vip
- 192.168.23.104 rac11g_2-vip
-
- #Private IP
- 12.12.12.12 rac11g_1-private
- 12.12.12.13 rac11g_2-private
-
- #Scan IP
- 192.168.23.105 rac11g-scan
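Mistyped or duplicated /etc/hosts entries are a common cause of later cluster verification failures, so the file is worth checking on both nodes. A small sketch (check_hosts is a throwaway helper, not part of the install media):

```shell
#!/bin/sh
# Verify that every RAC hostname appears exactly once in a hosts file.
# Usage: check_hosts FILE NAME...
check_hosts() {
  file=$1; shift
  rc=0
  for name in "$@"; do
    # count non-comment lines whose name fields contain an exact match
    n=$(awk -v n="$name" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) c++ } END { print c + 0 }' "$file")
    if [ "$n" -ne 1 ]; then
      echo "WARN: $name appears $n time(s) in $file"
      rc=1
    fi
  done
  return $rc
}

# On a node:
# check_hosts /etc/hosts rac11g_1 rac11g_2 rac11g_1-vip rac11g_2-vip rac11g-scan
```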
- [root@rac11g_1 ~]# systemctl stop firewalld.service
- [root@rac11g_1 ~]# systemctl disable firewalld.service
- [root@rac11g_1 ~]# systemctl status firewalld.service
- [root@rac11g_1 ~]# vi /etc/selinux/config
- # change to
- SELINUX=disabled
-
- # take effect immediately (permissive); disabled applies after a reboot
- [root@rac11g_1 ~]# setenforce 0
- [root@rac11g_1 ~]# vi /etc/sysctl.conf
- kernel.shmmax = 4294967296
- kernel.shmall = 1048576
- kernel.shmmni = 4096
- fs.aio-max-nr = 1048576
- fs.file-max = 6815744
- kernel.sem = 250 32000 100 128
- net.ipv4.ip_local_port_range = 9000 65500
- net.core.rmem_default = 262144
- net.core.rmem_max = 4194304
- net.core.wmem_default = 262144
- net.core.wmem_max = 1048576
-
- # apply the settings
- [root@rac11g_1 ~]# sysctl -p
| Parameter | Description |
|---|---|
| kernel.shmmax | Maximum size of a single shared memory segment; it should let the whole SGA fit in one segment. Example: with 8G of RAM the database should use at most ~80% of physical memory (SGA+PGA), so SGA < 4.8G and PGA < 1.6G; the maximum shmmax is then 4.8 x 1024 x 1024 x 1024 = 5153960755 |
| kernel.shmall | Total shared memory, in pages = shmmax/PAGESIZE; check the page size with getconf PAGESIZE |
| kernel.shmmni | Maximum number of shared memory segments (default value) |
| fs.aio-max-nr | Maximum number of outstanding asynchronous I/O requests, guards against I/O subsystem exhaustion (default value) |
| fs.file-max | Maximum number of file handles allowed (default value) |
| kernel.sem | Semaphore parameters; the spaces must not be removed (default value) |
| net.ipv4.ip_local_port_range | Local IPv4 port range used by Oracle |
| net.core.rmem_default | default value |
| net.core.rmem_max | default value |
| net.core.wmem_default | default value |
| net.core.wmem_max | default value |
| vm.nr_hugepages | Huge pages: the default page size is 4K, 2M pages are recommended for the database, and the total must exceed the SGA. Formula: sga_max_size/2M plus 100-500M of headroom. With 8G of RAM the SGA is at most 4.8G: (4.8x1024)/2 = 2457.6, round up to about 2600 |
Note:
- kernel.sem has four numbers: the 1st should be slightly larger than the number of Oracle processes, and the 2nd is recommended to be the product of the 1st and the 4th. The values above cover most workloads, but for high connection counts (for example 8000 connections on a single node) they can be set to: 10000 1280000 512 1024
- kernel.shmall = physical RAM size / pagesize. For most systems, this will be the value 2097152. See Note 301830.1 for more information. That is the official guidance; some tutorials use shmmax/pagesize instead.
- kernel.shmmax = 1/2 of physical RAM. This would be the value 2147483648 for a system with 4GB of physical RAM. See Note 567506.1 for more information.
- kernel.shmmax defines the maximum size of a single shared memory segment; it has to hold the entire database SGA, so it must be larger than sga_max_size.
- the memlock value configured in the limits file must be greater than or equal to kernel.shmmax.
- vm.nr_hugepages must be calculated correctly; a wrong value can keep the VM from booting (it is not used in this walkthrough).
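The sizing rules above can be checked with a little arithmetic. The sketch below follows the official "shmmax = 1/2 RAM, shmall = RAM / page size" notes (the sysctl.conf shown earlier uses the shmmax/pagesize variant for shmall, which is why its value differs); suggest_shm is a hypothetical helper, with the inputs passed as parameters so the math is easy to verify:

```shell
#!/bin/sh
# Compute suggested kernel.shmmax / kernel.shmall from RAM size and page size.
# On a real host you would use:
#   MEM_BYTES=$(awk '/MemTotal/{print $2*1024}' /proc/meminfo)
#   PAGE=$(getconf PAGESIZE)
suggest_shm() {
  MEM_BYTES=$1
  PAGE=$2
  SHMMAX=$((MEM_BYTES / 2))        # 1/2 of physical RAM (Note 567506.1 rule)
  SHMALL=$((MEM_BYTES / PAGE))     # whole RAM expressed in pages (Note 301830.1 rule)
  echo "kernel.shmmax = $SHMMAX"
  echo "kernel.shmall = $SHMALL"
}

suggest_shm $((8 * 1024 * 1024 * 1024)) 4096   # the 8G RAM / 4K page case used here
```

For the 8G machines in this walkthrough this yields shmmax = 4294967296, matching the sysctl.conf above.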
- [root@rac11g_1 ~]# vi /etc/security/limits.conf
- grid soft nproc 2047
- grid hard nproc 16384
- grid soft nofile 1024
- grid hard nofile 65536
- oracle soft nproc 2047
- oracle hard nproc 16384
- oracle soft nofile 1024
- oracle hard nofile 65536
- oracle hard memlock unlimited
- oracle soft memlock unlimited
Notes:
soft is the soft limit and hard is the hard limit
hitting the soft limit produces a warning; hitting the hard limit produces an error
nproc: OS-level limit on the number of processes each user can create
nofile: the number of file descriptors (handles) a single process can hold open, i.e. how many files one process may open; it bounds the application's concurrency
stack: stack size
memlock: maximum amount of memory that can be locked, in KB; set it slightly below physical memory
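To confirm the limits file actually grants what the installer later checks for, a throwaway script can parse it. A sketch (limit_of and check_user_limits are hypothetical helpers; they only handle numeric values, not "unlimited"):

```shell
#!/bin/sh
# Check that a limits.conf-style file grants a user at least the values the
# Oracle installer verifies (hard nofile 65536, hard nproc 16384).
limit_of() {  # limit_of FILE USER TYPE ITEM -> prints the configured value
  awk -v u="$2" -v t="$3" -v i="$4" \
      '$1 == u && $2 == t && $3 == i { print $4 }' "$1"
}

check_user_limits() {
  nofile=$(limit_of "$1" "$2" hard nofile)
  nproc=$(limit_of "$1" "$2" hard nproc)
  # missing entries count as 0 and therefore fail the check
  [ "${nofile:-0}" -ge 65536 ] && [ "${nproc:-0}" -ge 16384 ]
}

# On a node:
# check_user_limits /etc/security/limits.conf grid || echo "grid limits too low"
```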
- [root@rac11g_1 ~]# vi /etc/pam.d/login
- # append
- session required pam_limits.so
- [root@rac11g_1 ~]# groupadd -g 1001 oinstall
- [root@rac11g_1 ~]# groupadd -g 1002 dba
- [root@rac11g_1 ~]# groupadd -g 1003 oper
- [root@rac11g_1 ~]# groupadd -g 1004 asmadmin
- [root@rac11g_1 ~]# groupadd -g 1005 asmdba
- [root@rac11g_1 ~]# groupadd -g 1006 asmoper
- [root@rac11g_1 ~]# useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
- [root@rac11g_1 ~]# useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
-
- # set passwords for grid and oracle
- [root@rac11g_1 ~]# passwd grid
- [root@rac11g_1 ~]# passwd oracle
- [root@rac11g_1 ~]# mkdir -p /home/u01/app/11.2.0/grid
- [root@rac11g_1 ~]# mkdir -p /home/u01/app/grid
- [root@rac11g_1 ~]# mkdir -p /home/u01/app/oracle
- [root@rac11g_1 ~]# chown -R grid:oinstall /home/u01
- [root@rac11g_1 ~]# chown oracle:oinstall /home/u01/app/oracle
- [root@rac11g_1 ~]# chmod -R 775 /home/u01/
grid:
- [grid@rac11g_1 ~]$ vi .bash_profile
- export ORACLE_BASE=/home/u01/app/grid
- export ORACLE_HOME=/home/u01/app/11.2.0/grid
- export ORACLE_SID=+ASM1
- export LD_LIBRARY_PATH=$ORACLE_HOME/lib
- export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
- export PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin
- export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
- umask 022
- [grid@rac11g_1 ~]$ source .bash_profile
oracle:
- [oracle@rac11g_1 ~]$ vi .bash_profile
- export ORACLE_BASE=/home/u01/app/oracle
- export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
- export ORACLE_SID=rac1
- export LD_LIBRARY_PATH=$ORACLE_HOME/lib
- export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
- export TMP=/tmp
- export TMPDIR=$TMP
- export PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin
- export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
- umask 022
- [oracle@rac11g_1 ~]$ source .bash_profile
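After sourcing either profile it is cheap to confirm the variables took effect. A minimal sketch (check_env is a hypothetical helper; it only tests that each named variable is non-empty, not that the paths are right):

```shell
#!/bin/sh
# Confirm that the Oracle environment variables from .bash_profile are set.
check_env() {
  missing=""
  for v in "$@"; do
    eval "val=\${$v}"                 # indirect lookup of the variable named $v
    [ -n "$val" ] || missing="$missing $v"
  done
  [ -z "$missing" ] || { echo "unset:$missing"; return 1; }
}

# e.g. as the grid user after 'source .bash_profile':
# check_env ORACLE_BASE ORACLE_HOME ORACLE_SID || echo "profile not loaded"
```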
- [root@rac11g_1 ~]# cd /soft/
- [root@rac11g_1 soft]# mount -o loop CentOS-7-x86_64-DVD-2009.iso /media/
- [root@rac11g_1 ~]# cd /etc/yum.repos.d/
- [root@rac11g_1 yum.repos.d]# mkdir bak
- [root@rac11g_1 yum.repos.d]# mv CentOS-* bak
- [root@rac11g_1 yum.repos.d]# cd bak
- [root@rac11g_1 bak]# cp CentOS-Media.repo ..
- [root@rac11g_1 bak]# cd ..
- [root@rac11g_1 yum.repos.d]# vi CentOS-Media.repo
- [c7-media]
- name=CentOS-$releasever - Media
- baseurl=file:///media/
- gpgcheck=0
- enabled=1
- [root@rac11g_1 yum.repos.d]# yum clean all
- [root@rac11g_1 yum.repos.d]# yum makecache
- [root@rac11g_1 yum.repos.d]# yum -y install binutils compat-libcap1 gcc gcc-c++ glibc glibc-devel libaio libaio-devel libgcc libstdc++ libstdc++-devel libXi libXtst make sysstat elfutils-libelf-devel unixODBC unixODBC-devel
-
- [root@rac11g_1 soft]# rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
- [root@rac11g_1 soft]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm
Notes:
- the compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm package must be uploaded and installed separately
- the pdksh-5.2.14-37.el5_8.1.x86_64.rpm package must be uploaded and installed separately
- the cvuqdisk-1.0.9-1.rpm package is under grid/rpm in the unpacked grid installation package
- if installing pdksh reports a conflict, uninstall the conflicting package first
Shared storage is added through VMware Workstation itself; this is the most convenient approach and is fine for a self-built test environment.
- disk.locking="false"
- scsi1.sharedBus="virtual"
- scsi1:0.deviceType = "disk" (add one line per disk)
- scsi1:1.deviceType = "disk"
- scsi1:2.deviceType = "disk"
- scsi1:3.deviceType = "disk"
- diskLib.dataCacheMaxSize="0"
- diskLib.dataCacheMaxReadAheadSize="0"
- diskLib.DataCacheMinReadAheadSize="0"
- diskLib.dataCachePageSize="4096"
- diskLib.maxUnsyncedWrites="0"
- scsi1:0.mode = "independent-persistent"
- scsi1:1.mode = "independent-persistent"
- scsi1:2.mode = "independent-persistent"
- scsi1:3.mode = "independent-persistent"
- disk.EnableUUID = "TRUE"
Note:
- the UDEV approach requires the UUID, hence disk.EnableUUID
- create each disk as a single file with all space allocated immediately, otherwise the shared disks will raise errors during installation
After the steps above, power on the VMs and run fdisk -l on both node 1 and node 2; the device letters reported for the shared disks must be identical on both nodes.
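That device-letter requirement can be checked mechanically by comparing each node's disk inventory. The sketch below compares two captured listings (the sample data is hypothetical, mirroring the 2G*3/40G/15G layout above; on real nodes you would capture the output of `fdisk -l | grep '^Disk /dev'` from each side):

```shell
#!/bin/sh
# Compare the shared-disk inventories captured from two nodes; any difference
# means the device order diverged and must be fixed before installing Grid.
node1='/dev/sdb 2G
/dev/sdc 2G
/dev/sdd 2G
/dev/sde 40G
/dev/sdf 15G'
node2='/dev/sdb 2G
/dev/sdc 2G
/dev/sdd 2G
/dev/sde 40G
/dev/sdf 15G'

if [ "$node1" = "$node2" ]; then
  echo "disk layout matches"
else
  echo "disk layout DIFFERS between nodes"
fi
```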
The following script uses scsi_id to obtain each block device's unique identifier (WWID):
- for i in b c d e f;
- do
- echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""
- done
-
- # or alternatively
-
- for i in b c d e f;
- do
- echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d /dev/\$name\",RESULT==\"`/usr/lib/udev/scsi_id -g -u -d /dev/sd$i`\",NAME=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""
- done
-
Note: use $name for whole (unpartitioned) disks and $parent for partitions.
- for i in b c d e f;
- do
- echo "KERNEL==\"sd*\",ENV{DEVTYPE}==\"disk\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d \$devnode\",RESULT==\"`/usr/lib/udev/scsi_id -g -u -d /dev/sd$i`\",SYMLINK=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""
- done
- [root@rac11g_1 ~]# cd /etc/udev/rules.d
- [root@rac11g_1 rules.d]# vi 99-oracle-asmdevices.rules
- KERNEL=="sd*",ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="36000c29603092ba8de9e904d0bf7427d",SYMLINK="asm-ocr1",OWNER="grid",GROUP="asmadmin",MODE="0660"
- KERNEL=="sd*",ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="36000c294d210d9cef0f80370939947cb",SYMLINK="asm-ocr2",OWNER="grid",GROUP="asmadmin",MODE="0660"
- KERNEL=="sd*",ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="36000c294a6b42dcbe65b04f415586c39",SYMLINK="asm-ocr3",OWNER="grid",GROUP="asmadmin",MODE="0660"
- KERNEL=="sd*",ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="36000c2984687d8b88fd8a907c80d3232",SYMLINK="asm-data",OWNER="grid",GROUP="asmadmin",MODE="0660"
- KERNEL=="sd*",ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="36000c296e3fe7883081a894d6b840138",SYMLINK="asm-arch",OWNER="grid",GROUP="asmadmin",MODE="0660"
- [root@rac11g_1 rules.d]# scp 99-oracle-asmdevices.rules 192.168.23.102:/etc/udev/rules.d
- [root@rac11g_1 rules.d]# /sbin/udevadm trigger --type=devices --action=change
- [root@rac11g_1 rules.d]# /sbin/udevadm control --reload
- [root@rac11g_1 rules.d]# ll /dev/asm*
- lrwxrwxrwx 1 root root 3 Jun 15 14:23 /dev/asm-arch -> sdf
- lrwxrwxrwx 1 root root 3 Jun 15 14:23 /dev/asm-data -> sde
- lrwxrwxrwx 1 root root 3 Jun 15 14:29 /dev/asm-ocr1 -> sdb
- lrwxrwxrwx 1 root root 3 Jun 15 14:23 /dev/asm-ocr2 -> sdc
- lrwxrwxrwx 1 root root 3 Jun 15 14:23 /dev/asm-ocr3 -> sdd
-
- [root@rac11g_1 rules.d]# ll /dev/sd*
- brw-rw---- 1 root disk 8, 0 Jun 15 14:19 /dev/sda
- brw-rw---- 1 root disk 8, 1 Jun 15 14:19 /dev/sda1
- brw-rw---- 1 root disk 8, 2 Jun 15 14:19 /dev/sda2
- brw-rw---- 1 grid asmadmin 8, 16 Jun 15 15:57 /dev/sdb
- brw-rw---- 1 grid asmadmin 8, 32 Jun 15 15:57 /dev/sdc
- brw-rw---- 1 grid asmadmin 8, 48 Jun 15 15:57 /dev/sdd
- brw-rw---- 1 grid asmadmin 8, 64 Jun 15 15:57 /dev/sde
- brw-rw---- 1 grid asmadmin 8, 80 Jun 15 15:57 /dev/sdf
Two patches are needed to work around bugs in the grid installation.
- # Oracle database installation packages
- p13390677_112040_Linux-x86-64_1of7.zip
- p13390677_112040_Linux-x86-64_2of7.zip
-
- # grid installation package
- p13390677_112040_Linux-x86-64_3of7.zip
-
- # the second patch package; the first patch is in the rpm directory of the unpacked grid package
- p18370031_112040_Linux-x86-64.zip
Change /soft to mode 775, set the grid package's owner to grid:oinstall, and the oracle packages' owner to oracle:oinstall.
- [root@rac11g_1 ~]# chmod 775 /soft
- [root@rac11g_1 soft]# chown oracle:oinstall p13390677_112040_Linux-x86-64_1of7.zip
- [root@rac11g_1 soft]# chown oracle:oinstall p13390677_112040_Linux-x86-64_2of7.zip
- [root@rac11g_1 soft]# chown grid:oinstall p13390677_112040_Linux-x86-64_3of7.zip
- [grid@rac11g_1 soft]$ unzip p13390677_112040_Linux-x86-64_3of7.zip
Enter the unpacked grid directory and install the cvuqdisk-1.0.9-1.rpm package:
- [root@rac11g_1 grid]# cd /soft/grid/rpm
- [root@rac11g_1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
- [grid@rac11g_1 soft]$ cd grid
- [grid@rac11g_1 grid]$ export DISPLAY=192.168.23.1:0.0
- [grid@rac11g_1 grid]$ ./runInstaller
An extra language, Simplified Chinese, was selected here.
Keep these names consistent with /etc/hosts; the SCAN name must not begin with a digit or it will not be detected.
Change the ASM disk discovery path; the shared ASM disk names are under /dev.
Select all three OCR disks and choose Normal redundancy.
Set all passwords to the same value; a popup will complain the password is too weak, just click Yes.
The grid software home and base directory are filled in from the grid user's environment variables configured earlier.
First click Fix & Check Again.
Run the fixup script as root on both nodes; after it finishes, click OK and the environment is checked again.
After the fixes, install the missing packages.
pdksh-5.2.14-37.el5_8.1.x86_64.rpm must be downloaded and installed manually; once everything is in place, re-check as shown below.
Device checks for ASM: asmlib is not installed (udev binding is used instead), so this can be ignored.
Task resolv.conf integrity: caused by the configured DNS IP being unreachable; it does not affect the installation.
Check Ignore All and continue.
Note: the second patch must be applied at this point.
- # as the grid user
- [grid@rac11g_1 ~]$ cd /soft/
- [grid@rac11g_1 soft]$ unzip p18370031_112040_Linux-x86-64.zip
-
- # node 1, as grid
- [grid@rac11g_1 ~]$ cd $ORACLE_HOME/OPatch
- [grid@rac11g_1 OPatch]$ ./opatch napply -local /soft/18370031
-
- # node 2, as grid; copy over, then unzip and apply the same way as on node 1
- [grid@rac11g_1 soft]$ scp /soft/p18370031_112040_Linux-x86-64.zip rac11g_2:/soft/
-
- # afterwards run ./opatch lsinventory on both nodes to verify the patch applied
The patching session:
- [grid@rac11gstd OPatch]$ ./opatch napply -local /soft/18370031
- Oracle Interim Patch Installer version 11.2.0.3.4
- Copyright (c) 2012, Oracle Corporation. All rights reserved.
-
-
- Oracle Home : /home/u01/app/11.2.0/grid
- Central Inventory : /home/u01/app/oraInventory
- from : /home/u01/app/11.2.0/grid/oraInst.loc
- OPatch version : 11.2.0.3.4
- OUI version : 11.2.0.4.0
- Log file location : /home/u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2024-06-17_02-58-51AM_1.log
-
- Verifying environment and performing prerequisite checks...
- OPatch continues with these patches: 18370031
-
- Do you want to proceed? [y|n]
- y
- User Responded with: Y
- All checks passed.
-
- Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
- (Oracle Home = '/home/u01/app/11.2.0/grid')
-
-
- Is the local system ready for patching? [y|n]
- y
- User Responded with: Y
- Backing up files...
- Applying interim patch '18370031' to OH '/home/u01/app/11.2.0/grid'
-
- Patching component oracle.crs, 11.2.0.4.0...
-
- Verifying the update...
- Patch 18370031 successfully applied.
- Log file location: /home/u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2024-06-17_02-58-51AM_1.log
-
- OPatch succeeded.
Next, run the second script (root.sh): execute it on node 1 first, and only after it succeeds run it on node 2; once both nodes succeed, click OK.
Here is the output of the second script on node 1:
- [root@rac11g_1 ~]# /home/u01/app/11.2.0/grid/root.sh
- Performing root user operation for Oracle 11g
-
- The following environment variables are set as:
- ORACLE_OWNER= grid
- ORACLE_HOME= /home/u01/app/11.2.0/grid
-
- Enter the full pathname of the local bin directory: [/usr/local/bin]:
- Copying dbhome to /usr/local/bin ...
- Copying oraenv to /usr/local/bin ...
- Copying coraenv to /usr/local/bin ...
-
-
- Creating /etc/oratab file...
- Entries will be added to the /etc/oratab file as needed by
- Database Configuration Assistant when a database is created
- Finished running generic part of root script.
- Now product-specific root actions will be performed.
- Using configuration parameter file: /home/u01/app/11.2.0/grid/crs/install/crsconfig_params
- Creating trace directory
- User ignored Prerequisites during installation
- Installing Trace File Analyzer
- OLR initialization - successful
- root wallet
- root wallet cert
- root cert export
- peer wallet
- profile reader wallet
- pa wallet
- peer wallet keys
- pa wallet keys
- peer cert request
- pa cert request
- peer cert
- pa cert
- peer root cert TP
- profile reader root cert TP
- pa root cert TP
- peer pa cert TP
- pa peer cert TP
- profile reader pa cert TP
- profile reader peer cert TP
- peer user cert
- pa user cert
- Adding Clusterware entries to oracle-ohasd.service
- CRS-2672: Attempting to start 'ora.mdnsd' on 'rac11g_1'
- CRS-2676: Start of 'ora.mdnsd' on 'rac11g_1' succeeded
- CRS-2672: Attempting to start 'ora.gpnpd' on 'rac11g_1'
- CRS-2676: Start of 'ora.gpnpd' on 'rac11g_1' succeeded
- CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac11g_1'
- CRS-2672: Attempting to start 'ora.gipcd' on 'rac11g_1'
- CRS-2676: Start of 'ora.cssdmonitor' on 'rac11g_1' succeeded
- CRS-2676: Start of 'ora.gipcd' on 'rac11g_1' succeeded
- CRS-2672: Attempting to start 'ora.cssd' on 'rac11g_1'
- CRS-2672: Attempting to start 'ora.diskmon' on 'rac11g_1'
- CRS-2676: Start of 'ora.diskmon' on 'rac11g_1' succeeded
- CRS-2676: Start of 'ora.cssd' on 'rac11g_1' succeeded
-
- ASM created and started successfully.
-
- Disk Group OCR created successfully.
-
- clscfg: -install mode specified
- Successfully accumulated necessary OCR keys.
- Creating OCR keys for user 'root', privgrp 'root'..
- Operation successful.
- CRS-4256: Updating the profile
- Successful addition of voting disk 59619274ab3f4f96bfbfa86bc4c6286a.
- Successful addition of voting disk 2c6cbedca0a84fa6bf5800e580bc6994.
- Successful addition of voting disk 50ee34ae3e134f92bf225a1c3081dfee.
- Successfully replaced voting disk group with +OCR.
- CRS-4256: Updating the profile
- CRS-4266: Voting file(s) successfully replaced
- ## STATE File Universal Id File Name Disk group
- -- ----- ----------------- --------- ---------
- 1. ONLINE 59619274ab3f4f96bfbfa86bc4c6286a (/dev/asm-ocr1) [OCR]
- 2. ONLINE 2c6cbedca0a84fa6bf5800e580bc6994 (/dev/asm-ocr2) [OCR]
- 3. ONLINE 50ee34ae3e134f92bf225a1c3081dfee (/dev/asm-ocr3) [OCR]
- Located 3 voting disk(s).
- CRS-2672: Attempting to start 'ora.asm' on 'rac11g_1'
- CRS-2676: Start of 'ora.asm' on 'rac11g_1' succeeded
- CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac11g_1'
- CRS-2676: Start of 'ora.OCR.dg' on 'rac11g_1' succeeded
- Configure Oracle Grid Infrastructure for a Cluster ... succeeded
This error comes from the official DNS requirement and is a known bug; ignore it and click OK.
Click Skip.
Click Next, then Yes.
If the p18370031_112040_Linux-x86-64.zip patch has not been applied,
running /home/u01/app/11.2.0/grid/root.sh hangs for a long time and then fails with the error below.
Workarounds:
Option 1
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
Option 2
- [root@rac11g_1 init.d]# cd /var/tmp/.oracle
- [root@rac11g_1 .oracle]# rm -rf npohasd
- [root@rac11g_1 .oracle]# touch npohasd
- [root@rac11g_1 .oracle]# chmod 755 npohasd
Option 3
- 1. Create the ohas.service unit file and give it permissions
- touch /usr/lib/systemd/system/ohas.service
-
- chmod 777 /usr/lib/systemd/system/ohas.service
-
- 2. Add the ohasd startup details to the unit file
-
- vi /usr/lib/systemd/system/ohas.service
-
- add the following (Type=simple goes on its own line, not appended to ExecStart):
-
- [Unit]
- Description=Oracle High Availability Services
- After=syslog.target
-
- [Service]
- ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
- Type=simple
- Restart=always
-
- [Install]
- WantedBy=multi-user.target
-
- 3. Load and start the service
-
- reload the systemd daemon
-
- systemctl daemon-reload
-
- enable the service at boot
-
- systemctl enable ohas.service
-
- start the ohas service manually
-
- systemctl start ohas.service
Re-running /home/u01/app/11.2.0/grid/root.sh on node 1 then succeeds.
Applying the same method on node 2 produced the following error:
- [root@rac11g_2 tmp]# /home/u01/app/11.2.0/grid/root.sh
- Performing root user operation for Oracle 11g
-
- The following environment variables are set as:
- ORACLE_OWNER= grid
- ORACLE_HOME= /home/u01/app/11.2.0/grid
-
- Enter the full pathname of the local bin directory: [/usr/local/bin]:
- The contents of "dbhome" have not changed. No need to overwrite.
- The contents of "oraenv" have not changed. No need to overwrite.
- The contents of "coraenv" have not changed. No need to overwrite.
-
- Entries will be added to the /etc/oratab file as needed by
- Database Configuration Assistant when a database is created
- Finished running generic part of root script.
- Now product-specific root actions will be performed.
- Using configuration parameter file: /home/u01/app/11.2.0/grid/crs/install/crsconfig_params
- User ignored Prerequisites during installation
- Installing Trace File Analyzer
- CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac11g_2'
- CRS-2676: Start of 'ora.cssdmonitor' on 'rac11g_2' succeeded
- CRS-2672: Attempting to start 'ora.cssd' on 'rac11g_2'
- CRS-2672: Attempting to start 'ora.diskmon' on 'rac11g_2'
- CRS-2676: Start of 'ora.diskmon' on 'rac11g_2' succeeded
- CRS-2676: Start of 'ora.cssd' on 'rac11g_2' succeeded
-
- Mounting Disk Group OCR failed with the following message:
- ORA-15032: not all alterations performed
- ORA-15017: diskgroup "OCR" cannot be mounted
- ORA-15003: diskgroup "OCR" already mounted in another lock name space
-
-
- Configuration of ASM ... failed
- see asmca logs at /home/u01/app/grid/cfgtoollogs/asmca for details
- Did not succssfully configure and start ASM at /home/u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6912.
- /home/u01/app/11.2.0/grid/perl/bin/perl -I/home/u01/app/11.2.0/grid/perl/lib -I/home/u01/app/11.2.0/grid/crs/install /home/u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
Cause: the two nodes' root.sh runs were not properly serialized, so node 1 may not have released the disk group after finishing; another possibility is that the shared storage was not created as independent-persistent with all space allocated up front.
Fix:
As root, go to the crs/install directory under the Grid home and run ./rootcrs.pl -deconfig -force
This in turn failed:
- [root@rac11g_1 install]# /home/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
- Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /home/u01/app/11.2.0/grid/crs/install) at crsconfig_lib.pm line 703.
- BEGIN failed--compilation aborted at crsconfig_lib.pm line 703.
- Compilation failed in require at /home/u01/app/11.2.0/grid/crs/install/rootcrs.pl line 305.
- BEGIN failed--compilation aborted at /home/u01/app/11.2.0/grid/crs/install/rootcrs.pl line 305.
To resolve the missing-module error, copy Env.pm into Perl's library path:
cp -p /home/u01/app/11.2.0/grid/perl/lib/5.10.0/Env.pm /usr/lib64/perl5/vendor_perl/
Then run ./rootcrs.pl -deconfig -force again:
- [root@rac11g_1 5.10.0]# /home/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
- Using configuration parameter file: /home/u01/app/11.2.0/grid/crs/install/crsconfig_params
- Network exists: 1/192.168.23.0/255.255.255.0/ens33, type static
- VIP exists: /rac11g_1-vip/192.168.23.103/192.168.23.0/255.255.255.0/ens33, hosting node rac11g_1
- GSD exists
- ONS exists: Local port 6100, remote port 6200, EM port 2016
- CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac11g_1'
- CRS-2673: Attempting to stop 'ora.crsd' on 'rac11g_1'
- CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac11g_1'
- CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac11g_1'
- CRS-2673: Attempting to stop 'ora.oc4j' on 'rac11g_1'
- CRS-2677: Stop of 'ora.oc4j' on 'rac11g_1' succeeded
- CRS-2677: Stop of 'ora.OCR.dg' on 'rac11g_1' succeeded
- CRS-2673: Attempting to stop 'ora.asm' on 'rac11g_1'
- CRS-2677: Stop of 'ora.asm' on 'rac11g_1' succeeded
- CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac11g_1' has completed
- CRS-2677: Stop of 'ora.crsd' on 'rac11g_1' succeeded
- CRS-2673: Attempting to stop 'ora.ctssd' on 'rac11g_1'
- CRS-2673: Attempting to stop 'ora.evmd' on 'rac11g_1'
- CRS-2673: Attempting to stop 'ora.asm' on 'rac11g_1'
- CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac11g_1'
- CRS-2677: Stop of 'ora.evmd' on 'rac11g_1' succeeded
- CRS-2677: Stop of 'ora.mdnsd' on 'rac11g_1' succeeded
- CRS-2677: Stop of 'ora.ctssd' on 'rac11g_1' succeeded
- CRS-2677: Stop of 'ora.asm' on 'rac11g_1' succeeded
- CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac11g_1'
- CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac11g_1' succeeded
- CRS-2673: Attempting to stop 'ora.cssd' on 'rac11g_1'
- CRS-2677: Stop of 'ora.cssd' on 'rac11g_1' succeeded
- CRS-2673: Attempting to stop 'ora.crf' on 'rac11g_1'
- CRS-2677: Stop of 'ora.crf' on 'rac11g_1' succeeded
- CRS-2673: Attempting to stop 'ora.gipcd' on 'rac11g_1'
- CRS-2677: Stop of 'ora.gipcd' on 'rac11g_1' succeeded
- CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac11g_1'
- CRS-2677: Stop of 'ora.gpnpd' on 'rac11g_1' succeeded
- CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac11g_1' has completed
- CRS-4133: Oracle High Availability Services has been stopped.
- Successfully deconfigured Oracle clusterware stack on this node
Skipping the patch leads to a whole cascade of errors and is troublesome to clean up.
- [root@rac11g_1 ~]# su - grid
- [grid@rac11g_1 ~]$ cd /home/u01/app/11.2.0/grid/deinstall
- [grid@rac11g_1 deinstall]$ ./deinstall
- Checking for required files and bootstrapping ...
- Please wait ...
- Location of logs /tmp/deinstall2024-06-13_03-33-37PM/logs/
-
- ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
-
-
- ######################### CHECK OPERATION START #########################
- ## [START] Install check configuration ##
-
-
- Checking for existence of the Oracle home location /home/u01/app/11.2.0/grid
- Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
- Oracle Base selected for deinstall is: /home/u01/app/grid
- Checking for existence of central inventory location /home/u01/app/oraInventory
- Checking for existence of the Oracle Grid Infrastructure home
- The following nodes are part of this cluster: rac11g_1,rac11g_2
- Checking for sufficient temp space availability on node(s) : 'rac11g_1,rac11g_2'
-
- ## [END] Install check configuration ##
-
- Traces log file: /tmp/deinstall2024-06-13_03-33-37PM/logs//crsdc.log
- Enter an address or the name of the virtual IP used on node "rac11g_1"[rac11g_1-vip]
- >
-
- The following information can be collected by running "/sbin/ifconfig -a" on node "rac11g_1"
- Enter the IP netmask of Virtual IP "192.168.23.103" on node "rac11g_1"[255.255.255.0]
- >
-
- Enter the network interface name on which the virtual IP address "192.168.23.103" is active
- >
-
- Enter an address or the name of the virtual IP used on node "rac11g_2"[rac11g_2-vip]
- >
-
- The following information can be collected by running "/sbin/ifconfig -a" on node "rac11g_2"
- Enter the IP netmask of Virtual IP "192.168.23.104" on node "rac11g_2"[255.255.255.0]
- >
-
- Enter the network interface name on which the virtual IP address "192.168.23.104" is active
- >
-
- Enter an address or the name of the virtual IP[]
- >
-
-
- Network Configuration check config START
-
- Network de-configuration trace file location: /tmp/deinstall2024-06-13_03-33-37PM/logs/netdc_check2024-06-13_03-34-36-PM.log
-
- Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:
-
- Network Configuration check config END
-
- Asm Check Configuration START
-
- ASM de-configuration trace file location: /tmp/deinstall2024-06-13_03-33-37PM/logs/asmcadc_check2024-06-13_03-34-48-PM.log
-
- ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
- Is OCR/Voting Disk placed in ASM y|n [n]: y
-
- Enter the OCR/Voting Disk diskgroup name []:
- Specify the ASM Diagnostic Destination [ ]:
- Specify the diskstring []:
- Specify the diskgroups that are managed by this ASM instance []:
-
-
- ######################### CHECK OPERATION END #########################
-
-
- ####################### CHECK OPERATION SUMMARY #######################
- Oracle Grid Infrastructure Home is:
- The cluster node(s) on which the Oracle home deinstallation will be performed are:rac11g_1,rac11g_2
- Oracle Home selected for deinstall is: /home/u01/app/11.2.0/grid
- Inventory Location where the Oracle home registered is: /home/u01/app/oraInventory
- Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
- ASM instance will be de-configured from this Oracle home
- Do you want to continue (y - yes, n - no)? [n]: y
- A log of this session will be written to: '/tmp/deinstall2024-06-13_03-33-37PM/logs/deinstall_deconfig2024-06-13_03-33-42-PM.out'
- Any error messages from this session will be written to: '/tmp/deinstall2024-06-13_03-33-37PM/logs/deinstall_deconfig2024-06-13_03-33-42-PM.err'
-
- ######################## CLEAN OPERATION START ########################
- ASM de-configuration trace file location: /tmp/deinstall2024-06-13_03-33-37PM/logs/asmcadc_clean2024-06-13_03-36-40-PM.log
- ASM Clean Configuration START
- ASM Clean Configuration END
-
- Network Configuration clean config START
-
- Network de-configuration trace file location: /tmp/deinstall2024-06-13_03-33-37PM/logs/netdc_clean2024-06-13_03-36-44-PM.log
-
- De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
-
- De-configuring listener: LISTENER
- Stopping listener: LISTENER
- Warning: Failed to stop listener. Listener may not be running.
- Listener de-configured successfully.
-
- De-configuring listener: LISTENER_SCAN1
- Stopping listener: LISTENER_SCAN1
- Warning: Failed to stop listener. Listener may not be running.
- Listener de-configured successfully.
-
- De-configuring Naming Methods configuration file on all nodes...
- Naming Methods configuration file de-configured successfully.
-
- De-configuring Local Net Service Names configuration file on all nodes...
- Local Net Service Names configuration file de-configured successfully.
-
- De-configuring Directory Usage configuration file on all nodes...
- Directory Usage configuration file de-configured successfully.
-
- De-configuring backup files on all nodes...
- Backup files de-configured successfully.
-
- The network configuration has been cleaned up successfully.
-
- Network Configuration clean config END
-
-
- ---------------------------------------->
-
- The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
-
- Run the following command as the root user or the administrator on node "rac11g_2".
-
- /tmp/deinstall2024-06-13_03-33-37PM/perl/bin/perl -I/tmp/deinstall2024-06-13_03-33-37PM/perl/lib -I/tmp/deinstall2024-06-13_03-33-37PM/crs/install /tmp/deinstall2024-06-13_03-33-37PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2024-06-13_03-33-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
-
- Run the following command as the root user or the administrator on node "rac11g_1".
-
- /tmp/deinstall2024-06-13_03-33-37PM/perl/bin/perl -I/tmp/deinstall2024-06-13_03-33-37PM/perl/lib -I/tmp/deinstall2024-06-13_03-33-37PM/crs/install /tmp/deinstall2024-06-13_03-33-37PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2024-06-13_03-33-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
-
- Press Enter after you finish running the above commands
-
- <----------------------------------------
- [root@rac11g_2 install]# /tmp/deinstall2024-06-13_03-33-37PM/perl/bin/perl -I/tmp/deinstall2024-06-13_03-33-37PM/perl/lib -I/tmp/deinstall2024-06-13_03-33-37PM/crs/install /tmp/deinstall2024-06-13_03-33-37PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2024-06-13_03-33-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
- Using configuration parameter file: /tmp/deinstall2024-06-13_03-33-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- CRS-4047: No Oracle Clusterware components configured.
- CRS-4000: Command Stop failed, or completed with errors.
- ################################################################
- # You must kill processes or reboot the system to properly #
- # cleanup the processes started by Oracle clusterware #
- ################################################################
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
- Successfully deconfigured Oracle clusterware stack on this node
- [root@rac11g_1 install]# /tmp/deinstall2024-06-13_03-33-37PM/perl/bin/perl -I/tmp/deinstall2024-06-13_03-33-37PM/perl/lib -I/tmp/deinstall2024-06-13_03-33-37PM/crs/install /tmp/deinstall2024-06-13_03-33-37PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2024-06-13_03-33-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
- Using configuration parameter file: /tmp/deinstall2024-06-13_03-33-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp
-
- Adding Clusterware entries to inittab
- /crs/install/inittab does not exist.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- CRS-4047: No Oracle Clusterware components configured.
- CRS-4000: Command Stop failed, or completed with errors.
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- ################################################################
- # You must kill processes or reboot the system to properly #
- # cleanup the processes started by Oracle clusterware #
- ################################################################
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
- error: package cvuqdisk is not installed
- Successfully deconfigured Oracle clusterware stack on this node
After both nodes have finished, return to the original window and press Enter to continue.
- Setting the force flag to false
- Setting the force flag to cleanup the Oracle Base
- Oracle Universal Installer clean START
-
- Detach Oracle home '/home/u01/app/11.2.0/grid' from the central inventory on the local node : Done
-
- Delete directory '/home/u01/app/11.2.0/grid' on the local node : Done
-
- Delete directory '/home/u01/app/oraInventory' on the local node : Done
-
- Delete directory '/home/u01/app/grid' on the local node : Done
-
- Detach Oracle home '/home/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'rac11g_2' : Done
-
- Delete directory '/home/u01/app/11.2.0/grid' on the remote nodes 'rac11g_2' : Done
-
- Delete directory '/home/u01/app/oraInventory' on the remote nodes 'rac11g_2' : Done
-
- Delete directory '/home/u01/app/grid' on the remote nodes 'rac11g_2' : Done
-
- Oracle Universal Installer cleanup was successful.
-
- Oracle Universal Installer clean END
-
-
- ## [START] Oracle install clean ##
-
- Clean install operation removing temporary directory '/tmp/deinstall2024-06-13_03-33-37PM' on node 'rac11g_1'
- Clean install operation removing temporary directory '/tmp/deinstall2024-06-13_03-33-37PM' on node 'rac11g_2'
-
- ## [END] Oracle install clean ##
-
-
- ######################### CLEAN OPERATION END #########################
-
-
- ####################### CLEAN OPERATION SUMMARY #######################
- ASM instance was de-configured successfully from the Oracle home
- Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
- Oracle Clusterware is stopped and successfully de-configured on node "rac11g_2"
- Oracle Clusterware is stopped and successfully de-configured on node "rac11g_1"
- Oracle Clusterware is stopped and de-configured successfully.
- Successfully detached Oracle home '/home/u01/app/11.2.0/grid' from the central inventory on the local node.
- Successfully deleted directory '/home/u01/app/11.2.0/grid' on the local node.
- Successfully deleted directory '/home/u01/app/oraInventory' on the local node.
- Successfully deleted directory '/home/u01/app/grid' on the local node.
- Successfully detached Oracle home '/home/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'rac11g_2'.
- Successfully deleted directory '/home/u01/app/11.2.0/grid' on the remote nodes 'rac11g_2'.
- Successfully deleted directory '/home/u01/app/oraInventory' on the remote nodes 'rac11g_2'.
- Successfully deleted directory '/home/u01/app/grid' on the remote nodes 'rac11g_2'.
- Oracle Universal Installer cleanup was successful.
-
-
- Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac11g_1,rac11g_2' at the end of the session.
-
- Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac11g_1,rac11g_2' at the end of the session.
- Run 'rm -rf /etc/oratab' as root on node(s) 'rac11g_1' at the end of the session.
- Oracle deinstall tool successfully cleaned up temporary directories.
- #######################################################################
- ############# ORACLE DEINSTALL & DECONFIG TOOL END #############
If ASM disks were used, they must be cleaned first: once an installation has been attempted, the ASM disks are marked as used and no longer appear as candidate disks. To reuse them, run the following on both nodes.
- [root@rac11g_1 ~]# dd if=/dev/zero of=/dev/asm-ocr1 bs=1024 count=100
- [root@rac11g_1 ~]# dd if=/dev/zero of=/dev/asm-ocr2 bs=1024 count=100
- [root@rac11g_1 ~]# dd if=/dev/zero of=/dev/asm-ocr3 bs=1024 count=100
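The same wipe can be wrapped in a small helper so every disk is cleared consistently; a minimal sketch (the `wipe_asm_header` name is mine, and the device list must be adapted to your own udev aliases — this is destructive, so double-check the paths):

```shell
# Zero the first 100 KB of an ASM disk so it is offered as a
# candidate disk again.  DESTRUCTIVE: verify the device path first.
wipe_asm_header() {
    dd if=/dev/zero of="$1" bs=1024 count=100 2>/dev/null
}

# On both nodes, over the udev aliases used earlier in this guide:
# for d in /dev/asm-ocr1 /dev/asm-ocr2 /dev/asm-ocr3; do
#     wipe_asm_header "$d"
# done
```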
Remove leftover files; clean up on both nodes:
- [root@rac11g_1 app]# rm -rf /etc/ora*
- [root@rac11g_1 app]# rm -rf /u01/app/grid/11.2.0/*
- [root@rac11g_1 app]# rm -rf /u01/app/oraInventory
- [root@rac11g_1 app]# rm -rf /etc/oraInst.loc
- [root@rac11g_1 app]# rm -rf /etc/oracle/ocr.loc
Run asmca as the grid user:
[grid@rac11g_1 grid]$ asmca
The OCR disk group (for OCR and the voting disk) was already created during the grid installation.
Click Create to create the DATA and FRA disk groups.
Note: shared disks must be created as independent, with all space allocated immediately.
- [grid@rac11g_1 ~]$ sqlplus / as sysasm
-
- SQL> select name,state from v$asm_diskgroup;
- NAME STATE
- ------------------------------ ---------------------------------
- DATA MOUNTED
- FRA MOUNTED
- OCR MOUNTED
If the shared disks were not set to independent with space allocated up front, node 2 cannot mount the disk groups. In that case, drop the disk group, then delete the disk group resource, remove the disks in the virtual machine, recreate them, and add them back to the udev rules file.
- When the disk group is mounted:
- SQL> drop diskgroup diskgroup_name;
-
- If the disk group contains any data, add INCLUDING CONTENTS:
-
- SQL> drop diskgroup diskgroup_name including contents;
-
- If the disk group cannot be mounted, add FORCE; this wipes the disk headers and all data in the disk group:
- SQL> drop diskgroup diskgroup_name force including contents;
-
- SQL> drop diskgroup DATA force including contents;
-
- Diskgroup dropped.
-
-
- The other option, EXCLUDING CONTENTS, is the default; it only succeeds when the disk group is empty and raises an error otherwise.
-
- Also, when creating disk groups or other test objects, avoid the bare name "test"; use names like testdg or test_tb to stay clear of keyword conflicts.
[root@rac11g_1 ~]# /home/u01/app/11.2.0/grid/bin/crsctl delete resource ora.DATA.dg
- [root@rac11g_1 ~]# su - oracle
- [oracle@rac11g_1 ~]$ cd /soft
- [oracle@rac11g_1 soft]$ unzip p13390677_112040_Linux-x86-64_1of7.zip
- [oracle@rac11g_1 soft]$ unzip p13390677_112040_Linux-x86-64_2of7.zip
- [oracle@rac11g_1 soft]$ export DISPLAY=192.168.23.1:0.0
- [oracle@rac11g_1 soft]$ cd database/
- [oracle@rac11g_1 database]$ ./runInstaller
Select Simplified Chinese as an additional language.
Ignore all prerequisite warnings.
At about 56% the installer reports this error; it is a known bug when installing 11gR2 on CentOS 7.
Fix:
Only on node 1: back up the file ins_emagent.mk, then search the file for NMECTL and add -lnnz11 after the closing parenthesis, then save.
- [root@rac11g_1 ~]# cd /home/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib
- [root@rac11g_1 lib]# cp ins_emagent.mk ins_emagent.mk_bak20240614
- [root@rac11g_1 lib]# vi ins_emagent.mk
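The manual edit above can also be scripted; a sketch assuming the target line contains `$(MK_EMAGENT_NMECTL)`, as it does in the 11.2.0.4 `ins_emagent.mk` (verify against your copy; the function name is mine):

```shell
# Back up ins_emagent.mk, then append -lnnz11 to the $(MK_EMAGENT_NMECTL)
# link line.  Safe to re-run: already-patched files are skipped.
fix_ins_emagent() {
    mk=$1
    grep -q 'MK_EMAGENT_NMECTL) -lnnz11' "$mk" && return 0
    cp "$mk" "${mk}_bak$(date +%Y%m%d)"
    sed -i 's/\$(MK_EMAGENT_NMECTL)/& -lnnz11/' "$mk"
}

# fix_ins_emagent /home/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk
```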
Then click Retry.
After the root scripts have been run:
[oracle@rac11g_1 database]$ dbca
SGA+PGA should stay below 80% of physical memory; the recommended range is 40-60%.
This environment has 8G of physical memory; 1G is reserved for the grid cluster, leaving 7G for SGA+PGA.
7G x 80% = 5.6G
SGA = 5.6G x 80% ≈ 4587M; I used 3584M
PGA = 5.6G x 20% ≈ 1146M; I used 980M
In total 3584+980 = 4564M, and 4564/1024/8 ≈ 55%, within the recommended range.
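The sizing arithmetic above generalizes to a small helper; a sketch using integer MB values (the 80% ceiling, the 80/20 SGA/PGA split, and the 1G grid reservation are this guide's rules of thumb, and `suggest_memory` is a name I made up):

```shell
# Given physical memory and the grid reservation in MB, print the
# memory budget: 80% of what remains, split 80/20 between SGA and PGA.
suggest_memory() {
    phys_mb=$1
    grid_mb=$2
    budget=$(( (phys_mb - grid_mb) * 80 / 100 ))
    sga=$(( budget * 80 / 100 ))
    pga=$(( budget * 20 / 100 ))
    echo "budget=${budget}M sga=${sga}M pga=${pga}M"
}

suggest_memory 8192 1024   # the 8G host with 1G reserved for grid
```

For the 8G host this prints sga=4587M pga=1146M, matching the figures above.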
Choose the UTF8 character set.
Change open_cursors to 500.
Keep the default dedicated server mode.
The maximum number of data files can be increased; set other parameters according to your own production needs.
Database creation takes some time to complete.
The standby database is a single-node RAC.
Node 1 | |
hostname | rac11gstd |
CPU | 2*2 |
Memory | 8G |
SWAP | 8G |
Local disk | 100G |
Shared storage
Disk group | Size | Redundancy | Purpose |
OCR | 1G*3 | Normal | OCR, voting disk |
DATA | 25G | External | Control files, data files, redo, etc. |
FRA | 10G | External | Archive logs, flashback files, etc. |
IP addresses
Node 1 | |
Public IP | 192.168.23.110 |
VIP | 192.168.23.112 |
Private IP | 13.13.13.13 |
Scan IP | 192.168.23.114 |
Virtual machine
Configured the same way as the primary.
Shared-storage setup, grid installation, and Oracle software installation are the same as on the primary.
- shutdown immediate;
- startup mount;
- alter database archivelog;
- alter database open;
- archive log list;
alter system set log_archive_dest_1='location=+FRA';
alter database force logging;
Create standby log files on the primary; each instance needs one more group than its online redo log groups:
standby logfile groups = (1 + online log groups per thread) * threads = (1+2)*2 = 6
- SQL> select * from v$log
-
- GROUP# THREAD# SEQUENCE# BYTES BLOCKSIZE MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME NEXT_CHANGE# NEXT_TIME
- ---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------------- ------------ -------------------
- 1 1 79 52428800 512 2 NO INACTIVE 892360 2024-06-14 15:32:50 917066 2024-06-16 18:37:44
- 2 1 80 52428800 512 2 NO CURRENT 917066 2024-06-16 18:37:44 2.8147E+14
- 3 2 1 52428800 512 2 NO CURRENT 896190 2024-06-14 15:33:02 2.8147E+14 2024-06-14 15:33:02
- 4 2 0 52428800 512 2 YES UNUSED 0 0
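The group-count formula above can be expressed as a tiny helper (the function name is mine):

```shell
# Standby redo log groups required = (online groups per thread + 1) * threads
standby_log_groups() {
    echo $(( ($1 + 1) * $2 ))
}

standby_log_groups 2 2   # 2 online groups per thread, 2 threads -> 6
```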
Add the standby logs:
- alter database add standby logfile thread 1 group 5 size 200m;
- alter database add standby logfile thread 1 group 6 size 200m;
- alter database add standby logfile thread 1 group 7 size 200m;
- alter database add standby logfile thread 2 group 8 size 200m;
- alter database add standby logfile thread 2 group 9 size 200m;
- alter database add standby logfile thread 2 group 10 size 200m;
The new standby logs differ in size from the existing redo logs, so resize the primary's redo logs:
- Add new log groups
- alter database add logfile thread 1 group 11 size 200m;
- alter database add logfile thread 1 group 12 size 200m;
- alter database add logfile thread 2 group 13 size 200m;
- alter database add logfile thread 2 group 14 size 200m;
-
- Switch log groups (run on the other instance as well)
- alter system switch logfile;
-
- Drop the old groups once their status is INACTIVE
- alter database drop logfile group 1;
- alter database drop logfile group 2;
- alter database drop logfile group 3;
- alter database drop logfile group 4;
-
- Re-add groups 1-4 at the new size
- alter database add logfile thread 1 group 1 size 200m;
- alter database add logfile thread 1 group 2 size 200m;
- alter database add logfile thread 2 group 3 size 200m;
- alter database add logfile thread 2 group 4 size 200m;
-
- Switch log groups (run on the other instance as well)
- alter system switch logfile;
-
- Drop the temporary groups once their status is INACTIVE
- alter database drop logfile group 11;
- alter database drop logfile group 12;
- alter database drop logfile group 13;
- alter database drop logfile group 14;
As the grid user, add the following to the $ORACLE_HOME/network/admin/listener.ora file; set GLOBAL_DBNAME to the db_unique_name and SID_NAME to that node's ORACLE_SID.
Node 1
- SID_LIST_LISTENER =
- (SID_LIST =
- (SID_DESC =
- (GLOBAL_DBNAME = zsorcl)
- (ORACLE_HOME = /home/u01/app/oracle/product/11.2.0/dbhome_1)
- (SID_NAME = zsorcl1)
- )
- )
Node 2
- SID_LIST_LISTENER =
- (SID_LIST =
- (SID_DESC =
- (GLOBAL_DBNAME = zsorcl)
- (ORACLE_HOME = /home/u01/app/oracle/product/11.2.0/dbhome_1)
- (SID_NAME = zsorcl2)
- )
- )
Restart the listener:
- lsnrctl stop
- lsnrctl start
As the oracle user, add the following to the $ORACLE_HOME/network/admin/tnsnames.ora file. In zsorcl_p, the HOST entries are the public IPs and VIPs, and SERVICE_NAME is the GLOBAL_DBNAME from listener.ora, i.e. the primary's db_unique_name; the file is the same on both nodes. In zsorcl_s, SERVICE_NAME is the standby's db_unique_name.
- zsorcl_p =
- (DESCRIPTION =
- (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.23.101)(PORT = 1521))
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.23.103)(PORT = 1521))
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.23.102)(PORT = 1521))
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.23.104)(PORT = 1521))
- )
- (CONNECT_DATA =
- (server=dedicated)
- (SERVICE_NAME = zsorcl)
- )
- )
-
- zsorcl_s =
- (DESCRIPTION =
- (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.23.110)(PORT = 1521))
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.23.112)(PORT = 1521))
- )
- (CONNECT_DATA =
- (server=dedicated)
- (SERVICE_NAME = zsorcldg)
- )
- )
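Hand-edited tnsnames.ora files often break on unbalanced parentheses, a classic cause of ORA-12154; a quick sanity check, not a full parser (the function name is mine):

```shell
# Compare '(' and ')' counts in a tnsnames.ora file; a mismatch almost
# always means a broken alias definition.
check_tns_parens() {
    opens=$(grep -o '(' "$1" | wc -l)
    closes=$(grep -o ')' "$1" | wc -l)
    if [ "$opens" -eq "$closes" ]; then
        echo "balanced ($opens pairs)"
    else
        echo "UNBALANCED: $opens '(' vs $closes ')'"
        return 1
    fi
}

# check_tns_parens $ORACLE_HOME/network/admin/tnsnames.ora
```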
Test the connections:
- tnsping zsorcl_p
- tnsping zsorcl_s
- Changing db_unique_name requires a restart; if you leave the primary's db_unique_name alone, no restart is needed. With ASM it is best not to change it, since OMF by default places files under a db_unique_name path inside the specified disk group; if you do change it, make sure the convert parameters include at least two conversion path pairs.
-
- #zsorcl is the primary's db_unique_name, zsorcldg is the standby's
- alter system set log_archive_config='dg_config=(zsorcl,zsorcldg)' scope=both sid='*';
-
- #Configure the local archive destination
- alter system set log_archive_dest_1='location=+fra valid_for=(all_logfiles,all_roles) db_unique_name=zsorcl' scope=both sid='*';
- alter system set log_archive_dest_state_1=enable;
-
- #service is the standby's TNS alias from tnsnames.ora
- alter system set log_archive_dest_2='service=zsorcl_s lgwr async valid_for=(online_logfiles,primary_role) db_unique_name=zsorcldg' scope=both sid='*';
- alter system set log_archive_dest_state_2=enable;
-
- #Physical standby only. When set to AUTO, data files added or dropped on the primary are automatically added or dropped on the standby as well; the default is MANUAL
- alter system set standby_file_management='AUTO' scope=both sid='*';
-
- alter system set fal_server='zsorcl_s' scope=both sid='*';
-
- #The first pattern is the standby's datafile path, the second is the primary's; takes effect after a restart
- alter system set db_file_name_convert='+DATA','+DATA' scope=spfile sid='*';
-
- #The first pattern is the standby's log file path, the second is the primary's; takes effect after a restart
- alter system set log_file_name_convert='+DATA','+DATA' scope=spfile sid='*';
-
- #Archive log name format
- alter system set log_archive_format='%t_%s_%r.arch' scope=spfile sid='*';
Protection modes
Maximum Protection | Maximum Availability | Maximum Performance | |
Redo archival process | LGWR | LGWR | LGWR or ARCH |
Network transmission mode | SYNC | SYNC | SYNC or ASYNC when using LGWR process. SYNC if using ARCH process |
Disk write option | AFFIRM | AFFIRM | AFFIRM or NOAFFIRM |
Standby redo log required? | YES | YES | No, but it is recommended |
- AFFIRM: redo shipped from the primary counts as received only once it has been written to the standby redo logs on the standby.
- ASYNC (asynchronous): redo generated by a transaction need not reach destinations with this attribute before the transaction commits. This is the default.
- SYNC (synchronous): redo generated by a transaction must be received by every destination with this attribute before the transaction commits.
LGWR transport works roughly like this:
1) Primary: as soon as new redo is generated, LGWR triggers the LNSn process to ship it to the standby. (LGWR cannot ship directly: each instance has only one LGWR, and shipping would hurt foreground performance.)
2) Standby: the RFS process receives the redo and writes it to the standby redo logs; with real-time apply enabled it is applied immediately, otherwise it is applied after the standby redo log is archived.
3) The difference between SYNC and ASYNC: with SYNC, LNSn starts shipping while the redo is still in memory; with ASYNC, LNSn starts shipping only after the redo has been written to the online redo log.
ARCH transport: the primary writes redo to the online redo logs until a group fills up, which triggers the archiver to create an archive log; that archive log is then shipped to the standby over the network.
In terms of synchronization timeliness: LGWR(SYNC) > LGWR(ASYNC) > ARCH.
Copy the primary's password file to the standby and name it orapwzsorcl1 to match the standby instance name zsorcl1:
[oracle@rac11g_1 dbs]$ scp orapwzsorcl1 oracle@192.168.23.110:/home/u01/app/oracle/product/11.2.0/dbhome_1/dbs
- [grid@rac11gstd admin]$ vi listener.ora
- #Add the following
- SID_LIST_LISTENER =
- (SID_LIST =
- (SID_DESC =
- (GLOBAL_DBNAME = zsorcldg)
- (ORACLE_HOME = /home/u01/app/oracle/product/11.2.0/dbhome_1)
- (SID_NAME = zsorcl1)
- )
- )
-
- #Reload the listener
- [grid@rac11gstd admin]$ lsnrctl reload
Copy the primary's tnsnames.ora file to the standby:
[oracle@rac11g_1 admin]$ scp tnsnames.ora oracle@192.168.23.110:/home/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/
Create a pfile on the primary, copy it to /home/u01/app/oracle/product/11.2.0/dbhome_1/dbs on the standby, and edit it there:
- [oracle@rac11g_1 ~]$sqlplus / as sysdba
- SQL> create pfile='/home/oracle/pfile_std.ora' from spfile;
- [oracle@rac11g_1 ~]$ scp pfile_std.ora oracle@192.168.23.110:/home/u01/app/oracle/product/11.2.0/dbhome_1/dbs
The standby pfile:
- [oracle@rac11gstd dbs]$ cat pfile_std.ora
- *.cluster_database=true
- *.compatible='11.2.0.4.0'
- *.db_create_file_dest='+DATA'
- *.db_create_online_log_dest_1='+DATA'
- *.db_create_online_log_dest_2='+FRA'
- *.db_domain=''
- *.db_file_name_convert='+DATA','+DATA'
- *.db_name='zsorcl'
- *.db_unique_name='zsorcldg'
- *.diagnostic_dest='/home/u01/app/oracle'
- *.dispatchers='(PROTOCOL=TCP) (SERVICE=zsorclXDB)'
- *.fal_server='zsorcl_p'
- zsorcl1.instance_number=1
- *.log_archive_config='dg_config=(zsorcl,zsorcldg)'
- *.log_archive_dest_1='location=+fra valid_for=(all_logfiles,all_roles) db_unique_name=zsorcldg'
- *.log_archive_dest_2='service=zsorcl_p lgwr async valid_for=(online_logfiles,primary_role) db_unique_name=zsorcl'
- *.log_archive_format='%t_%s_%r.arch'
- *.log_file_name_convert='+DATA','+DATA'
- *.open_cursors=500
- *.processes=500
- *.remote_listener='rac11gstd-scan:1521'
- *.remote_login_passwordfile='exclusive'
- *.sessions=555
- *.standby_file_management='AUTO'
- zsorcl1.thread=1
- zsorcl1.undo_tablespace='UNDOTBS1'
Start the standby to the NOMOUNT state:
- [oracle@rac11gstd dbs]$ sqlplus / as sysdba
- SQL> startup nomount pfile='/home/u01/app/oracle/product/11.2.0/dbhome_1/dbs/pfile_std.ora';
As the oracle user on primary node 1, run:
[oracle@rac11g_1 ~]$ rman target sys/oracle@zsorcl_p auxiliary sys/oracle@zsorcl_s
Run the duplicate; this single command restores the data files, standby log files, and control file:
- RMAN> run {
- allocate channel ch1 type disk;
- allocate channel ch2 type disk;
- allocate channel ch3 type disk;
- allocate channel ch4 type disk;
- allocate auxiliary channel ch5 type disk;
- allocate auxiliary channel ch6 type disk;
- duplicate target database for standby from active database nofilenamecheck;
- release channel ch1;
- release channel ch2;
- release channel ch3;
- release channel ch4;
- release channel ch5;
- release channel ch6;
- }
- contents of Memory Script:
- {
- sql clone 'alter database mount standby database';
- }
- executing Memory Script
-
- sql statement: alter database mount standby database
- released channel: ch1
- released channel: ch2
- released channel: ch3
- released channel: ch4
- released channel: ch5
- released channel: ch6
- RMAN-00571: ===========================================================
- RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
- RMAN-00571: ===========================================================
- RMAN-03002: failure of Duplicate Db command at 06/19/2024 11:44:46
- RMAN-05501: aborting duplication of target database
- RMAN-03015: error occurred in stored script Memory Script
- RMAN-03009: failure of sql command on clone_default channel at 06/19/2024 11:44:46
- RMAN-11003: failure during parse/execution of SQL statement: alter database mount standby database
- ORA-00201: control file version 11.2.0.4.0 incompatible with ORACLE version 11.2.0.0.0
- ORA-00202: control file: '+DATA/zsorcldg/controlfile/current.256.1172087029'
-
- RMAN>
The standby pfile must include the compatible parameter; otherwise the duplicate fails with the version-mismatch error above.
Note:
Because the standby database was not created with dbca, the permissions on the standby oracle user's $ORACLE_HOME/bin/oracle binary are wrong, and the duplicate will fail. Bad ownership or permissions on $ORACLE_HOME/bin/oracle can cause many problems: inability to log in to the database, ORA-600 errors, "TNS-12518: TNS:listener could not hand off client connection", "Linux Error: 32: Broken pipe", "ORA-12537: TNS:connection closed", errors accessing ASM disks, and so on. The fix is simple: run the setasmgidwrap command as the grid user to reset the ownership and permissions of $ORACLE_HOME/bin/oracle, or just change the file's permissions to 6751. The correct ownership is oracle:asmadmin, and the s (setuid/setgid) bits must be present. In my case the duplicate only succeeded after I also rebooted the standby server; the same fix without a reboot did not work.
- #grid用户
- [grid@rac11gstd ~]$ setasmgidwrap -o /home/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle
- #root用户
- [root@rac11gstd ~]# cd /home/u01/app/oracle/product/11.2.0/dbhome_1/bin
- [root@rac11gstd bin]# chmod 6751 oracle
- [root@rac11gstd bin]# ll oracle
- -rwsr-s--x 1 oracle asmadmin 239501536 Jun 14 14:10 oracle
With the ownership and permissions fixed on the standby, the complete duplicate run:
- [oracle@rac11g_1 ~]$ rman target sys/oracle@zsorcl_p auxiliary sys/oracle@zsorcl_s
-
- Recovery Manager: Release 11.2.0.4.0 - Production on Wed Jun 19 11:48:52 2024
-
- Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
-
- connected to target database: ZSORCL (DBID=3108552649)
- connected to auxiliary database: ZSORCL (not mounted)
-
- RMAN> run {
- allocate channel ch1 type disk;
- allocate channel ch2 type disk;
- allocate channel ch3 type disk;
- allocate channel ch4 type disk;
- allocate auxiliary channel ch5 type disk;
- allocate auxiliary channel ch6 type disk;
- duplicate target database for standby from active database nofilenamecheck;
- release channel ch1;
- release channel ch2;
- release channel ch3;
- release channel ch4;
- release channel ch5;
- release channel ch6;
- }
-
- using target database control file instead of recovery catalog
- allocated channel: ch1
- channel ch1: SID=402 instance=zsorcl1 device type=DISK
-
- allocated channel: ch2
- channel ch2: SID=594 instance=zsorcl1 device type=DISK
-
- allocated channel: ch3
- channel ch3: SID=202 instance=zsorcl1 device type=DISK
-
- allocated channel: ch4
- channel ch4: SID=403 instance=zsorcl1 device type=DISK
-
- allocated channel: ch5
- channel ch5: SID=10 instance=zsorcl1 device type=DISK
-
- allocated channel: ch6
- channel ch6: SID=203 instance=zsorcl1 device type=DISK
-
- Starting Duplicate Db at 2024-06-19 11:48:57
-
- contents of Memory Script:
- {
- backup as copy reuse
- targetfile '/home/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwzsorcl1' auxiliary format
- '/home/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwzsorcl1' ;
- }
- executing Memory Script
-
- Starting backup at 2024-06-19 11:48:57
- Finished backup at 2024-06-19 11:48:58
-
- contents of Memory Script:
- {
- backup as copy current controlfile for standby auxiliary format '+DATA/zsorcldg/controlfile/current.257.1172087339';
- restore clone controlfile to '+FRA/zsorcldg/controlfile/current.257.1172087339' from
- '+DATA/zsorcldg/controlfile/current.257.1172087339';
- sql clone "create spfile from memory";
- shutdown clone immediate;
- startup clone nomount;
- sql clone "alter system set control_files =
- ''+DATA/zsorcldg/controlfile/current.257.1172087339'', ''+FRA/zsorcldg/controlfile/current.257.1172087339'' comment=
- ''Set by RMAN'' scope=spfile";
- shutdown clone immediate;
- startup clone nomount;
- }
- executing Memory Script
-
- Starting backup at 2024-06-19 11:48:58
- channel ch1: starting datafile copy
- copying standby control file
- output file name=/home/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_zsorcl1.f tag=TAG20240619T114858 RECID=2 STAMP=1172058539
- channel ch1: datafile copy complete, elapsed time: 00:00:01
- Finished backup at 2024-06-19 11:49:00
-
- Starting restore at 2024-06-19 11:49:00
-
- channel ch6: skipped, AUTOBACKUP already found
- channel ch5: copied control file copy
- Finished restore at 2024-06-19 11:49:09
-
- sql statement: create spfile from memory
- Oracle instance shut down
- connected to auxiliary database (not started)
- Oracle instance started
- Total System Global Area 342425600 bytes
- Fixed Size 2253024 bytes
- Variable Size 285216544 bytes
- Database Buffers 50331648 bytes
- Redo Buffers 4624384 bytes
- allocated channel: ch5
- channel ch5: SID=580 instance=zsorcl1 device type=DISK
- allocated channel: ch6
- channel ch6: SID=10 instance=zsorcl1 device type=DISK
-
- sql statement: alter system set control_files = ''+DATA/zsorcldg/controlfile/current.257.1172087339'', ''+FRA/zsorcldg/controlfile/current.257.1172087339'' comment= ''Set by RMAN'' scope=spfile
- Oracle instance shut down
- connected to auxiliary database (not started)
- Oracle instance started
- Total System Global Area 342425600 bytes
- Fixed Size 2253024 bytes
- Variable Size 285216544 bytes
- Database Buffers 50331648 bytes
- Redo Buffers 4624384 bytes
- allocated channel: ch5
- channel ch5: SID=580 instance=zsorcl1 device type=DISK
- allocated channel: ch6
- channel ch6: SID=10 instance=zsorcl1 device type=DISK
-
- contents of Memory Script:
- {
- sql clone 'alter database mount standby database';
- }
- executing Memory Script
-
- sql statement: alter database mount standby database
- RMAN-05529: WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to disk group only.
-
- contents of Memory Script:
- {
- set newname for tempfile 1 to
- "+data";
- switch clone tempfile all;
- set newname for datafile 1 to
- "+data";
- set newname for datafile 2 to
- "+data";
- set newname for datafile 3 to
- "+data";
- set newname for datafile 4 to
- "+data";
- set newname for datafile 5 to
- "+data";
- backup as copy reuse
- datafile 1 auxiliary format
- "+data" datafile
- 2 auxiliary format
- "+data" datafile
- 3 auxiliary format
- "+data" datafile
- 4 auxiliary format
- "+data" datafile
- 5 auxiliary format
- "+data" ;
- sql 'alter system archive log current';
- }
- executing Memory Script
-
- executing command: SET NEWNAME
-
- renamed tempfile 1 to +data in control file
-
- executing command: SET NEWNAME
-
- executing command: SET NEWNAME
-
- executing command: SET NEWNAME
-
- executing command: SET NEWNAME
-
- executing command: SET NEWNAME
-
- Starting backup at 2024-06-19 11:50:02
- channel ch1: starting datafile copy
- input datafile file number=00003 name=+DATA/zsorcl/datafile/undotbs1.261.1171638355
- channel ch2: starting datafile copy
- input datafile file number=00001 name=+DATA/zsorcl/datafile/system.259.1171638351
- channel ch3: starting datafile copy
- input datafile file number=00002 name=+DATA/zsorcl/datafile/sysaux.260.1171638353
- channel ch4: starting datafile copy
- input datafile file number=00004 name=+DATA/zsorcl/datafile/undotbs2.263.1171638361
- output file name=+DATA/zsorcldg/datafile/undotbs2.261.1172087403 tag=TAG20240619T115002
- channel ch4: datafile copy complete, elapsed time: 00:00:15
- channel ch4: starting datafile copy
- input datafile file number=00005 name=+DATA/zsorcl/datafile/users.264.1171638361
- output file name=+DATA/zsorcldg/datafile/system.259.1172087403 tag=TAG20240619T115002
- channel ch2: datafile copy complete, elapsed time: 00:00:16
- output file name=+DATA/zsorcldg/datafile/users.262.1172087419 tag=TAG20240619T115002
- channel ch4: datafile copy complete, elapsed time: 00:00:01
- output file name=+DATA/zsorcldg/datafile/undotbs1.258.1172087403 tag=TAG20240619T115002
- channel ch1: datafile copy complete, elapsed time: 00:00:19
- output file name=+DATA/zsorcldg/datafile/sysaux.260.1172087403 tag=TAG20240619T115002
- channel ch3: datafile copy complete, elapsed time: 00:00:18
- Finished backup at 2024-06-19 11:50:21
-
- sql statement: alter system archive log current
-
- contents of Memory Script:
- {
- switch clone datafile all;
- }
- executing Memory Script
-
- datafile 1 switched to datafile copy
- input datafile copy RECID=2 STAMP=1172087428 file name=+DATA/zsorcldg/datafile/system.259.1172087403
- datafile 2 switched to datafile copy
- input datafile copy RECID=3 STAMP=1172087428 file name=+DATA/zsorcldg/datafile/sysaux.260.1172087403
- datafile 3 switched to datafile copy
- input datafile copy RECID=4 STAMP=1172087428 file name=+DATA/zsorcldg/datafile/undotbs1.258.1172087403
- datafile 4 switched to datafile copy
- input datafile copy RECID=5 STAMP=1172087428 file name=+DATA/zsorcldg/datafile/undotbs2.261.1172087403
- datafile 5 switched to datafile copy
- input datafile copy RECID=6 STAMP=1172087429 file name=+DATA/zsorcldg/datafile/users.262.1172087419
- Finished Duplicate Db at 2024-06-19 11:50:37
-
- released channel: ch1
-
- released channel: ch2
-
- released channel: ch3
-
- released channel: ch4
-
- released channel: ch5
-
- released channel: ch6
-
- RMAN>
Open the standby and enable real-time apply:
- #Shut down the database
- shutdown immediate
-
- #Start the database
- startup
-
- #Check open_mode; the standby opens READ ONLY
- select database_role,protection_mode,protection_level,open_mode from v$database;
-
- #Check the managed-recovery processes
- select process,client_process,sequence#,status from v$managed_standby;
-
- #Enable real-time apply on the standby
- recover managed standby database using current logfile disconnect from session;
-
- #Check the processes again; an MRP0 process has appeared
- select process,client_process,sequence#,status from v$managed_standby;
Applying redo on the standby with vs. without the USING CURRENT LOGFILE option:
- Without USING CURRENT LOGFILE, the standby only mines changes from the archived logs after the primary switches logs, then applies them.
- With USING CURRENT LOGFILE, the standby applies redo as it is received, in real time, even in maximum performance mode.
On the primary:
- create user zs identified by oracle;
-
- grant dba to zs;
-
- conn zs/oracle
- create table test_sync(id number);
-
- select count(*) from test_sync;
-
- COUNT(*)
- ----------
- 0
-
- begin
- for i in 1..10000 loop
- insert into test_sync values (520);
- end loop;
- end;
- /
-
- commit;
-
- select count(*) from test_sync;
-
- COUNT(*)
- ----------
- 10000
Check on the standby that the data has arrived:
- select count(*) from test_sync;
-
- COUNT(*)
- ----------
- 10000
ADG verification complete.
Shutdown order: shut down the primary >> cancel redo apply on the standby >> shut down the standby >> stop the listeners on primary and standby.
- 1: On the primary: SQL> shutdown immediate
-
- 2: On the standby: SQL> alter database recover managed standby database cancel;
-
- 3: On the standby: SQL> shutdown immediate
-
- 4: Stop the listeners
-
- Primary:
- [grid@rac11g_1 ~]$ lsnrctl stop
- [grid@rac11g_2 ~]$ lsnrctl stop
-
- Standby:
- [grid@rac11gstd ~]$ lsnrctl stop
Startup order: start the primary and standby listeners >> start the standby >> start the primary >> switch logs on the primary.
- 1: Primary and standby:
- [grid@rac11g_1 ~]$ lsnrctl start
- [grid@rac11g_2 ~]$ lsnrctl start
-
- [grid@rac11gstd ~]$ lsnrctl start
-
- 2: Start the standby:
- SQL> startup nomount
- SQL> alter database mount standby database;
- SQL> alter database open;
- SQL> alter database recover managed standby database using current logfile disconnect from session;
-
- 3: Start the primary:
- SQL> startup
This completes the Oracle 11g RAC + single-node RAC ADG setup.
Check the protection mode:
select database_role,protection_mode,protection_level,open_mode from v$database;
Check the managed-recovery processes:
select process,client_process,sequence#,status from v$managed_standby;
Check redo already archived on the standby:
SELECT REGISTRAR, CREATOR, THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$ARCHIVED_LOG;
Check redo already applied on the standby:
SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$LOG_HISTORY;
Check the standby's redo receive and apply progress:
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
Check log reception status on the standby:
select status,target,archiver,error,process from v$archive_dest;
Check the current log sequence:
archive log list;
Check apply lag on the standby:
select * from v$dataguard_stats;