Oracle public cloud: Oracle Cloud Infrastructure (OCI)
OS: Oracle Linux 7, running a container database (Enterprise Edition, RAC, release 19.3.0) with instance name ORCLCDB and one pluggable database, orclpdb1. Both RAC nodes run on the same host.
Create a Linux instance with shape VM.Standard2.2 and a boot volume customized to 200 GB.
Create one 50 GB block volume and attach it to the Linux instance; the device name is sdb.
The block device layout looks like this:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 50G 0 disk
sda 8:0 0 200G 0 disk
├─sda2 8:2 0 8G 0 part [SWAP]
├─sda3 8:3 0 38.4G 0 part /
└─sda1 8:1 0 200M 0 part /boot/efi
At this point the file system still uses the default partition layout:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 15G 0 15G 0% /dev
tmpfs 15G 0 15G 0% /dev/shm
tmpfs 15G 8.7M 15G 1% /run
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/sda3 39G 7.9G 31G 21% /
/dev/sda1 200M 9.7M 191M 5% /boot/efi
tmpfs 3.0G 0 3.0G 0% /run/user/1000
The root partition therefore needs to be extended; see How to resize root partition of Oracle Linux instance in Oracle Cloud Infrastructure (OCI) (Doc ID 2488082.1). The procedure is as follows:
# rpm -qa | grep growpart
cloud-utils-growpart-0.29-5.el7.noarch
# growpart /dev/sda 3
CHANGED: partition=3 start=17188864 old: size=80486400 end=97675264 new: size=402241502 end=419430366
# xfs_growfs /
meta-data=/dev/sda3              isize=256    agcount=4, agsize=2515200 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=10060800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=4912, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10060800 to 50280187
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb      8:16   0    50G  0 disk
sda      8:0    0   200G  0 disk
├─sda2   8:2    0     8G  0 part [SWAP]
├─sda3   8:3    0 191.8G  0 part /
└─sda1   8:1    0   200M  0 part /boot/efi
# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         15G     0   15G   0% /dev
tmpfs            15G     0   15G   0% /dev/shm
tmpfs            15G  8.7M   15G   1% /run
tmpfs            15G     0   15G   0% /sys/fs/cgroup
/dev/sda3       192G  7.9G  184G   5% /
/dev/sda1       200M  9.7M  191M   5% /boot/efi
tmpfs           3.0G     0  3.0G   0% /run/user/1000
All of the following commands are run after logging in to the Linux instance.
sudo yum install -y yum-utils
sudo yum-config-manager --enable ol7_addons
sudo yum install -y docker-engine
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker opc
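Note that group membership is evaluated at login, so the usermod change only applies to new sessions; either log out and back in, or switch groups in the current shell:
newgrp docker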
Confirm that Docker was installed successfully:
$ docker version
Client: Docker Engine - Community
 Version:           18.09.8-ol
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        76804b7
 Built:             Fri Sep 27 21:00:18 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.8-ol
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       76804b7
  Built:            Fri Sep 27 20:54:00 2019
  OS/Arch:          linux/amd64
  Experimental:     false
  Default Registry: docker.io
sudo yum install -y git
git clone https://github.com/oracle/docker-images.git
Docker containers inherit kernel parameters from the host OS, so the following parameters must be set in /etc/sysctl.conf:
fs.file-max = 6815744
net.core.rmem_max = 4194304
net.core.rmem_default = 262144
net.core.wmem_max = 1048576
net.core.wmem_default = 262144
Load the new settings (sysctl -p re-reads /etc/sysctl.conf; sysctl -a can then be used to list and verify the current values):
sudo sysctl -p
sudo sysctl -a
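A quick spot-check of one of the values:
$ sysctl fs.file-max
fs.file-max = 6815744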
docker network create --driver=bridge --subnet=172.16.1.0/24 rac_pub1_nw
docker network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw
Check the result:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
aac5636fe8fc bridge bridge local
051a1439f036 host host local
1a9007862a18 none null local
fe35e54e1aa0 rac_priv1_nw bridge local
0c6bdebeab78 rac_pub1_nw bridge local
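To confirm the subnet of each network, docker network inspect prints the full configuration; for example:
$ docker network inspect rac_pub1_nw | grep -i subnet
"Subnet": "172.16.1.0/24",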
Some RAC processes need to run in real-time mode, so add the following to /etc/sysconfig/docker:
OPTIONS='--selinux-enabled --cpu-rt-runtime=950000'
Apply the change:
sudo systemctl daemon-reload
sudo systemctl stop docker
sudo systemctl start docker
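Confirm that the daemon restarted with the new option by checking its command line; the output should include --cpu-rt-runtime=950000:
ps -ef | grep dockerd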
Configure SELinux in permissive mode (/etc/selinux/config); details omitted. Then reboot the instance for the SELinux change to take effect.
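The omitted SELinux step is a one-line edit; a minimal sketch, assuming the file still carries the default SELINUX=enforcing:
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
sudo reboot
After the reboot, getenforce should print Permissive.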
Disk usage at this point:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 15G 0 15G 0% /dev
tmpfs 15G 0 15G 0% /dev/shm
tmpfs 15G 8.6M 15G 1% /run
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/sda3 192G 7.9G 184G 5% /
/dev/sda1 200M 9.7M 191M 5% /boot/efi
tmpfs 3.0G 0 3.0G 0% /run/user/1000
Change to the 19.3.0 dockerfiles directory and copy in the installation media; the copy took 0m13.417s on the cloud host:
cd docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles/19.3.0
cp /vagrant/LINUX.X64_193000_db_home.zip .
cp /vagrant/LINUX.X64_193000_grid_home.zip .
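Before starting the build it is worth a quick sanity check that both archives arrived intact (the build script also verifies their checksums, as the "OK" lines in the log below show):
ls -lh LINUX.X64_193000_*.zip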
Disk usage at this point:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 15G 0 15G 0% /dev
tmpfs 15G 0 15G 0% /dev/shm
tmpfs 15G 8.6M 15G 1% /run
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/sda3 192G 14G 179G 8% /
/dev/sda1 200M 9.7M 191M 5% /boot/efi
tmpfs 3.0G 0 3.0G 0% /run/user/1000
The bulk of this step is copying the installation media and setup scripts into the image and downloading OS updates over the network; GI and the database software are then installed. Run the following to start the build:
$ cd docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles
$ ls
12.2.0.1 18.3.0 19.3.0 buildDockerImage.sh
$ time ./buildDockerImage.sh -v 19.3.0
If there is not enough space, the build fails with an error:
...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
checkSpace.sh: ERROR - There is not enough space available in the docker container.
checkSpace.sh: The container needs at least 35 GB , but only 14 available.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
...
There was an error building the image.
Below is the complete log of a successful build; the whole process took 17 minutes:
Checking if required packages are present and valid... LINUX.X64_193000_grid_home.zip: OK LINUX.X64_193000_db_home.zip: OK ========================== DOCKER info: Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 0 Server Version: 18.09.8-ol Storage Driver: overlay2 Backing Filesystem: xfs Supports d_type: true Native Overlay Diff: false Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: c4446665cb9c30056f4998ed953e6d4ff22c7c39 runc version: 4bb1fe4ace1a32d3676bb98f5d3b6a4e32bf6c58 init version: fec3683 Security Options: seccomp Profile: default selinux Kernel Version: 4.14.35-1902.6.6.el7uek.x86_64 Operating System: Oracle Linux Server 7.7 OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 29.18GiB Name: instance-20191112-1400-docker ID: U7QK:QGCV:JXIT:GDAA:6UKP:HYX7:WLIY:5IPB:ITZS:5C6U:CJFC:POWW Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false Product License: Community Engine Registries: docker.io (secure) ========================== Building image 'oracle/database-rac:19.3.0' ... Sending build context to Docker daemon 5.949GB Step 1/11 : FROM oraclelinux:7-slim Trying to pull repository docker.io/library/oraclelinux ... 7-slim: Pulling from docker.io/library/oraclelinux a316717fc6ee: Pull complete Digest: sha256:c5f3baff726ffd97c7e9574e803ad0e8a1e5c7de236325eed9e87f853a746e90 Status: Downloaded newer image for oraclelinux:7-slim ---> 874477adb545 Step 2/11 : MAINTAINER Paramdeep Saini <paramdeep.saini@oracle.com> ---> Running in 565cb0eb8d8c Removing intermediate container 565cb0eb8d8c ---> 0ae5d2666d12 Step 3/11 : ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" INSTALL_DIR=/opt/scripts GRID_BASE=/u01/app/grid GRID_HOME=/u01/app/19.3.0/grid INSTALL_FILE_1="LINUX.X64_193000_grid_home.zip" GRID_INSTALL_RSP="gridsetup_19c.rsp" GRID_SW_INSTALL_RSP="grid_sw_install_19c.rsp" GRID_SETUP_FILE="setupGrid.sh" FIXUP_PREQ_FILE="fixupPreq.sh" INSTALL_GRID_BINARIES_FILE="installGridBinaries.sh" INSTALL_GRID_PATCH="applyGridPatch.sh" INVENTORY=/u01/app/oraInventory CONFIGGRID="configGrid.sh" ADDNODE="AddNode.sh" DELNODE="DelNode.sh" ADDNODE_RSP="grid_addnode.rsp" SETUPSSH="setupSSH.expect" DOCKERORACLEINIT="dockeroracleinit" GRID_USER_HOME="/home/grid" SETUPGRIDENV="setupGridEnv.sh" ASM_DISCOVERY_DIR="/dev" RESET_OS_PASSWORD="resetOSPassword.sh" MULTI_NODE_INSTALL="MultiNodeInstall.py" DB_BASE=/u01/app/oracle DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1 INSTALL_FILE_2="LINUX.X64_193000_db_home.zip" DB_INSTALL_RSP="db_sw_install_19c.rsp" DBCA_RSP="dbca_19c.rsp" DB_SETUP_FILE="setupDB.sh" PWD_FILE="setPassword.sh" RUN_FILE="runOracle.sh" STOP_FILE="stopOracle.sh" ENABLE_RAC_FILE="enableRAC.sh" CHECK_DB_FILE="checkDBStatus.sh" USER_SCRIPTS_FILE="runUserScripts.sh" REMOTE_LISTENER_FILE="remoteListener.sh" INSTALL_DB_BINARIES_FILE="installDBBinaries.sh" GRID_HOME_CLEANUP="GridHomeCleanup.sh" ORACLE_HOME_CLEANUP="OracleHomeCleanup.sh" DB_USER="oracle" GRID_USER="grid" FUNCTIONS="functions.sh" COMMON_SCRIPTS="/common_scripts" CHECK_SPACE_FILE="checkSpace.sh" RESET_FAILED_UNITS="resetFailedUnits.sh" SET_CRONTAB="setCrontab.sh" CRONTAB_ENTRY="crontabEntry" EXPECT="/usr/bin/expect" BIN="/usr/sbin" 
container="true" ---> Running in c4f7ada4af5b Removing intermediate container c4f7ada4af5b ---> f736cc3d01c3 Step 4/11 : ENV INSTALL_SCRIPTS=$INSTALL_DIR/install PATH=/bin:/usr/bin:/sbin:/usr/sbin:$PATH SCRIPT_DIR=$INSTALL_DIR/startup GRID_PATH=$GRID_HOME/bin:$GRID_HOME/OPatch/:/usr/sbin:$PATH DB_PATH=$DB_HOME/bin:$DB_HOME/OPatch/:/usr/sbin:$PATH GRID_LD_LIBRARY_PATH=$GRID_HOME/lib:/usr/lib:/lib DB_LD_LIBRARY_PATH=$DB_HOME/lib:/usr/lib:/lib ---> Running in 3aaf07070341 Removing intermediate container 3aaf07070341 ---> aa0c18835555 Step 5/11 : COPY $GRID_SW_INSTALL_RSP $INSTALL_GRID_PATCH $SETUP_LINUX_FILE $GRID_SETUP_FILE $INSTALL_GRID_BINARIES_FILE $FIXUP_PREQ_FILE $DB_SETUP_FILE $CHECK_SPACE_FILE $DB_INSTALL_RSP $INSTALL_DB_BINARIES_FILE $ENABLE_RAC_FILE $GRID_HOME_CLEANUP $ORACLE_HOME_CLEANUP $INSTALL_FILE_1 $INSTALL_FILE_2 $INSTALL_SCRIPTS/ ---> 09b35208cfd4 Step 6/11 : COPY $RUN_FILE $ADDNODE $ADDNODE_RSP $SETUPSSH $FUNCTIONS $CONFIGGRID $GRID_INSTALL_RSP $DBCA_RSP $PWD_FILE $CHECK_DB_FILE $USER_SCRIPTS_FILE $STOP_FILE $CHECK_DB_FILE $REMOTE_LISTENER_FILE $SETUPGRIDENV $DELNODE $RESET_OS_PASSWORD $MULTI_NODE_INSTALL $SCRIPT_DIR/ ---> 7f73cccc5246 Step 7/11 : RUN chmod 755 $INSTALL_SCRIPTS/*.sh && sync && $INSTALL_DIR/install/$CHECK_SPACE_FILE && $INSTALL_DIR/install/$SETUP_LINUX_FILE && $INSTALL_DIR/install/$GRID_SETUP_FILE && $INSTALL_DIR/install/$DB_SETUP_FILE && sed -e '/hard *memlock/s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-19c.conf && su $GRID_USER -c "$INSTALL_DIR/install/$INSTALL_GRID_BINARIES_FILE EE $PATCH_NUMBER" && $INVENTORY/orainstRoot.sh && $GRID_HOME/root.sh && su $DB_USER -c "$INSTALL_DIR/install/$INSTALL_DB_BINARIES_FILE EE" && su $DB_USER -c "$INSTALL_DIR/install/$ENABLE_RAC_FILE" && $INVENTORY/orainstRoot.sh && $DB_HOME/root.sh && su $GRID_USER -c "$INSTALL_SCRIPTS/$GRID_HOME_CLEANUP" && su $DB_USER -c "$INSTALL_SCRIPTS/$ORACLE_HOME_CLEANUP" && $INSTALL_DIR/install/$FIXUP_PREQ_FILE && rm -rf $INSTALL_DIR/install && rm -rf $INSTALL_DIR/install && sync && chmod 755 $SCRIPT_DIR/*.sh && chmod 755 $SCRIPT_DIR/*.expect && chmod 666 $SCRIPT_DIR/*.rsp && echo "nohup $SCRIPT_DIR/runOracle.sh &" >> /etc/rc.local && rm -f /etc/rc.d/init.d/oracle-database-preinstall-19c-firstboot && mkdir -p $GRID_HOME/dockerinit && cp $GRID_HOME/bin/$DOCKERORACLEINIT $GRID_HOME/dockerinit/ && chown $GRID_USER:oinstall $GRID_HOME/dockerinit && chown root:oinstall $GRID_HOME/dockerinit/$DOCKERORACLEINIT && chmod 4755 $GRID_HOME/dockerinit/$DOCKERORACLEINIT && ln -s $GRID_HOME/dockerinit/$DOCKERORACLEINIT /usr/sbin/oracleinit && chmod +x /etc/rc.d/rc.local && rm -f /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf && rm -f /etc/sysctl.d/99-sysctl.conf && sync ---> Running in f108cee70439 Loaded plugins: ovl No package openssh-client available. Resolving Dependencies --> Running transaction check ---> Package e2fsprogs.x86_64 0:1.42.9-16.el7 will be installed ... Complete! Loaded plugins: ovl Cleaning repos: ol7_UEKR5 ol7_developer_EPEL ol7_latest /opt/scripts/install/installGridBinaries.sh: line 57: : command not found Launching Oracle Grid Infrastructure Setup Wizard... [WARNING] [INS-13014] Target environment does not meet some optional requirements. CAUSE: Some of the optional prerequisites are not met. See logs for details. gridSetupActions2019-11-12_06-50-50AM.log ACTION: Identify the list of failed prerequisite checks from the log: gridSetupActions2019-11-12_06-50-50AM.log. 
Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually. The response file for this session can be found at: /u01/app/19.3.0/grid/install/response/grid_2019-11-12_06-50-50AM.rsp You can find the log of this install session at: /tmp/GridSetupActions2019-11-12_06-50-50AM/gridSetupActions2019-11-12_06-50-50AM.log As a root user, execute the following script(s): 1. /u01/app/oraInventory/orainstRoot.sh 2. /u01/app/19.3.0/grid/root.sh Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: [f108cee70439] Execute /u01/app/19.3.0/grid/root.sh on the following nodes: [f108cee70439] Successfully Setup Software with warning(s). Moved the install session logs to: /u01/app/oraInventory/logs/GridSetupActions2019-11-12_06-50-50AM Changing permissions of /u01/app/oraInventory. Adding read,write permissions for group. Removing read,write,execute permissions for world. Changing groupname of /u01/app/oraInventory to oinstall. The execution of the script is complete. Check /u01/app/19.3.0/grid/install/root_f108cee70439_2019-11-12_06-52-01-423025811.log for the output of root script Launching Oracle Database Setup Wizard... [WARNING] [INS-13014] Target environment does not meet some optional requirements. CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/InstallActions2019-11-12_06-53-12AM/installActions2019-11-12_06-53-12AM.log ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/InstallActions2019-11-12_06-53-12AM/installActions2019-11-12_06-53-12AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually. The response file for this session can be found at: /u01/app/oracle/product/19.3.0/dbhome_1/install/response/db_2019-11-12_06-53-12AM.rsp You can find the log of this install session at: /u01/app/oraInventory/logs/InstallActions2019-11-12_06-53-12AM/installActions2019-11-12_06-53-12AM.log As a root user, execute the following script(s): 1. /u01/app/oracle/product/19.3.0/dbhome_1/root.sh Execute /u01/app/oracle/product/19.3.0/dbhome_1/root.sh on the following nodes: [f108cee70439] Successfully Setup Software with warning(s). 
(if /u01/app/oracle/product/19.3.0/dbhome_1/bin/skgxpinfo | grep rds;\ then \ make -f /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ins_rdbms.mk ipc_rds; \ else \ make -f /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ins_rdbms.mk ipc_g; \ fi) make[1]: Entering directory `/' rm -f /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxp19.so cp /u01/app/oracle/product/19.3.0/dbhome_1/lib//libskgxpg.so /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxp19.so make[1]: Leaving directory `/' - Use stub SKGXN library cp /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxns.so /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxn2.so /usr/bin/ar d /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/libknlopt.a ksnkcs.o /usr/bin/ar cr /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/kcsm.o chmod 755 /u01/app/oracle/product/19.3.0/dbhome_1/bin - Linking Oracle rm -f /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/19.3.0/dbhome_1/bin/orald -o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/19.3.0/dbhome_1/lib/ -L/u01/app/oracle/product/19.3.0/dbhome_1/lib/stubs/ -Wl,-E /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv19 -Wl,--no-whole-archive /u01/app/oracle/product/19.3.0/dbhome_1/lib/nautab.o /u01/app/oracle/product/19.3.0/dbhome_1/lib/naeet.o /u01/app/oracle/product/19.3.0/dbhome_1/lib/naect.o /u01/app/oracle/product/19.3.0/dbhome_1/lib/naedhs.o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/config.o -ldmext -lserver19 -lodm19 -lofs -lcell19 -lnnet19 -lskgxp19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lclient19 -lvsnst19 -lcommon19 -lgeneric19 -lknlopt -loraolap19 -lskjcx19 -lslax19 -lpls19 -lrt -lplp19 -ldmext -lserver19 -lclient19 -lvsnst19 -lcommon19 -lgeneric19 `if [ -f /u01/app/oracle/product/19.3.0/dbhome_1/lib/libavserver19.a ] ; then echo "-lavserver19" ; else echo "-lavstub19"; fi` `if [ -f /u01/app/oracle/product/19.3.0/dbhome_1/lib/libavclient19.a ] ; then echo "-lavclient19" ; fi` -lknlopt -lslax19 -lpls19 -lrt -lplp19 -ljavavm19 -lserver19 -lwwg `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnro19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnnzst19 -lzt19 -lztkg19 -lmm -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lztkg19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnro19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnnzst19 -lzt19 -lztkg19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 `if /usr/bin/ar tv /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo19 -lserver19"; fi` -L/u01/app/oracle/product/19.3.0/dbhome_1/ctx/lib/ -lctxc19 -lctx19 -lzx19 -lgx19 -lctx19 -lzx19 -lgx19 -lclscest19 -loevm 
-lclsra19 -ldbcfg19 -lhasgen19 -lskgxn2 -lnnzst19 -lzt19 -lxml19 -lgeneric19 -locr19 -locrb19 -locrutl19 -lhasgen19 -lskgxn2 -lnnzst19 -lzt19 -lxml19 -lgeneric19 -lgeneric19 -lorazip -loraz -llzopro5 -lorabz2 -lorazstd -loralz4 -lipp_z -lipp_bz2 -lippdc -lipps -lippcore -lippcp -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lsnls19 -lunls19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lasmclnt19 -lcommon19 -lcore19 -ledtn19 -laio -lons -lmql1 -lipc1 -lfthread19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/19.3.0/dbhome_1/lib -lm `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/sysliblist` -ldl -lm -L/u01/app/oracle/product/19.3.0/dbhome_1/lib `test -x /usr/bin/hugeedit -a -r /usr/lib64/libhugetlbfs.so && test -r /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 -lhugetlbfs` rm -f /u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle mv /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle chmod 6751 /u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle (if [ ! -f /u01/app/oracle/product/19.3.0/dbhome_1/bin/crsd.bin ]; then \ getcrshome="/u01/app/oracle/product/19.3.0/dbhome_1/srvm/admin/getcrshome" ; \ if [ -f "$getcrshome" ]; then \ crshome="`$getcrshome`"; \ if [ -n "$crshome" ]; then \ if [ $crshome != /u01/app/oracle/product/19.3.0/dbhome_1 ]; then \ oracle="/u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle"; \ $crshome/bin/setasmgidwrap oracle_binary_path=$oracle; \ fi \ fi \ fi \ fi\ ); Changing permissions of /u01/app/oraInventory. Adding read,write permissions for group. Removing read,write,execute permissions for world. Changing groupname of /u01/app/oraInventory to oinstall. The execution of the script is complete. Check /u01/app/oracle/product/19.3.0/dbhome_1/install/root_f108cee70439_2019-11-12_06-54-50-216820267.log for the output of root script Preparing... ######################################## Updating / installing... cvuqdisk-1.0.10-1 ######################################## Removing intermediate container f108cee70439 ---> 1499832d7c36 Step 8/11 : USER grid ---> Running in beab30065475 Removing intermediate container beab30065475 ---> 02d660710bda Step 9/11 : WORKDIR /home/grid ---> Running in c726a9c65f56 Removing intermediate container c726a9c65f56 ---> e2df8c6a2349 Step 10/11 : VOLUME ["/common_scripts"] ---> Running in 1a36e19935bd Removing intermediate container 1a36e19935bd ---> 9171568f0bb7 Step 11/11 : CMD ["/usr/sbin/oracleinit"] ---> Running in 98cf4acff70a Removing intermediate container 98cf4acff70a ---> 29e8018e9d71 Successfully built 29e8018e9d71 Successfully tagged oracle/database-rac:19.3.0 Oracle Database Docker Image for Real Application Clusters (RAC) version 19.3.0 is ready to be extended: --> oracle/database-rac:19.3.0 Build completed in 959 seconds. real 16m59.020s user 0m18.100s sys 0m10.401s
At this point the RAC database Docker image is ready. The Linux base image used is the slim variant:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
oracle/database-rac 19.3.0 049f87053beb About an hour ago 20.6GB
oraclelinux 7-slim 874477adb545 3 months ago 118MB
Disk usage at this point:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 15G 0 15G 0% /dev
tmpfs 15G 0 15G 0% /dev/shm
tmpfs 15G 8.7M 15G 1% /run
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/sda3 192G 33G 159G 18% /
/dev/sda1 200M 9.7M 191M 5% /boot/efi
tmpfs 3.0G 0 3.0G 0% /run/user/1000
tmpfs 3.0G 0 3.0G 0% /run/user/0
In fact, the GI and database installation media can be deleted at this point.
Next, create the shared host file that every RAC container will mount as /etc/hosts:
sudo mkdir /opt/containers
sudo touch /opt/containers/rac_host_file
The ASM disk is simply the sdb volume attached at the very beginning:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 50G 0 disk
sda 8:0 0 200G 0 disk
├─sda2 8:2 0 8G 0 part [SWAP]
├─sda3 8:3 0 191.8G 0 part /
└─sda1 8:1 0 200M 0 part /boot/efi
Initialize the disk to make sure there is no file system on it:
$ sudo dd if=/dev/zero of=/dev/sdb bs=8k count=10000
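To double-check that no file system signature remains, blkid should print nothing for the device (alternatively, wipefs -a /dev/sdb clears any signatures directly):
sudo blkid /dev/sdb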
The password set below is shared by the oracle and grid OS users and the database.
mkdir /opt/.secrets/
openssl rand -out /opt/.secrets/pwd.key -hex 64
# Write the cleartext password to a temporary file
echo Oracle.123# >/opt/.secrets/common_os_pwdfile
# Encrypt it for storage
openssl enc -aes-256-cbc -salt -in /opt/.secrets/common_os_pwdfile -out /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key
# Remove the cleartext temporary file
rm -f /opt/.secrets/common_os_pwdfile
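As a quick check that the encrypted file can be recovered with the key (the container's setup scripts perform an equivalent decryption at startup), decrypt it to stdout:
openssl enc -d -aes-256-cbc -in /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key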
First create the container:
docker create -t -i \
  --hostname racnode1 \
  --volume /boot:/boot:ro \
  --volume /dev/shm \
  --tmpfs /dev/shm:rw,exec,size=4G \
  --volume /opt/containers/rac_host_file:/etc/hosts \
  --volume /opt/.secrets:/run/secrets \
  --dns-search=example.com \
  --device=/dev/sdb:/dev/asm_disk1 \
  --privileged=false \
  --cap-add=SYS_NICE \
  --cap-add=SYS_RESOURCE \
  --cap-add=NET_ADMIN \
  -e NODE_VIP=172.16.1.160 \
  -e VIP_HOSTNAME=racnode1-vip \
  -e PRIV_IP=192.168.17.150 \
  -e PRIV_HOSTNAME=racnode1-priv \
  -e PUBLIC_IP=172.16.1.150 \
  -e PUBLIC_HOSTNAME=racnode1 \
  -e SCAN_NAME=racnode-scan \
  -e SCAN_IP=172.16.1.70 \
  -e OP_TYPE=INSTALL \
  -e DOMAIN=example.com \
  -e ASM_DEVICE_LIST=/dev/asm_disk1 \
  -e ASM_DISCOVERY_DIR=/dev \
  -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
  -e PWD_KEY=pwd.key \
  --restart=always --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --cpu-rt-runtime=95000 --ulimit rtprio=99 \
  --name racnode1 \
  oracle/database-rac:19.3.0
Check the status:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd8b3c79e88f oracle/database-rac:19.3.0 "/usr/sbin/oracleinit" 43 seconds ago Created racnode1
Configure networking for racnode1:
docker network disconnect bridge racnode1
docker network connect rac_pub1_nw --ip 172.16.1.150 racnode1
docker network connect rac_priv1_nw --ip 192.168.17.150 racnode1
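Confirm that both addresses were assigned as expected (the Go template simply prints each connected network's IP; it should print 172.16.1.150 and 192.168.17.150, order may vary):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' racnode1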
Start the first container:
docker start racnode1
Follow the logs:
docker logs -f racnode1
Interestingly, the dbca process is visible both inside the container and on the host; containers share the host's kernel, so container processes also appear in the host's process table.
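For example, on the host:
ps -ef | grep -i dbca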
The following command opens a shell inside the container:
docker exec -it racnode1 bash
The dbca logs can be found under /u01/app/oracle/cfgtoollogs/dbca/ORCLCDB.
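You can also list them from the host without opening an interactive shell (the exact log file names vary per run):
docker exec racnode1 ls /u01/app/oracle/cfgtoollogs/dbca/ORCLCDB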
Below is the complete log of a successful run; the whole process took 42 minutes:
$ docker logs -f racnode1 PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=racnode1 TERM=xterm NODE_VIP=172.16.1.160 VIP_HOSTNAME=racnode1-vip PRIV_IP=192.168.17.150 PRIV_HOSTNAME=racnode1-priv PUBLIC_IP=172.16.1.150 PUBLIC_HOSTNAME=racnode1 SCAN_NAME=racnode-scan SCAN_IP=172.16.1.70 OP_TYPE=INSTALL DOMAIN=example.com ASM_DEVICE_LIST=/dev/asm_disk1 ASM_DISCOVERY_DIR=/dev COMMON_OS_PWD_FILE=common_os_pwdfile.enc PWD_KEY=pwd.key SETUP_LINUX_FILE=setupLinuxEnv.sh INSTALL_DIR=/opt/scripts GRID_BASE=/u01/app/grid GRID_HOME=/u01/app/19.3.0/grid INSTALL_FILE_1=LINUX.X64_193000_grid_home.zip GRID_INSTALL_RSP=gridsetup_19c.rsp GRID_SW_INSTALL_RSP=grid_sw_install_19c.rsp GRID_SETUP_FILE=setupGrid.sh FIXUP_PREQ_FILE=fixupPreq.sh INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh INSTALL_GRID_PATCH=applyGridPatch.sh INVENTORY=/u01/app/oraInventory CONFIGGRID=configGrid.sh ADDNODE=AddNode.sh DELNODE=DelNode.sh ADDNODE_RSP=grid_addnode.rsp SETUPSSH=setupSSH.expect DOCKERORACLEINIT=dockeroracleinit GRID_USER_HOME=/home/grid SETUPGRIDENV=setupGridEnv.sh RESET_OS_PASSWORD=resetOSPassword.sh MULTI_NODE_INSTALL=MultiNodeInstall.py DB_BASE=/u01/app/oracle DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1 INSTALL_FILE_2=LINUX.X64_193000_db_home.zip DB_INSTALL_RSP=db_sw_install_19c.rsp DBCA_RSP=dbca_19c.rsp DB_SETUP_FILE=setupDB.sh PWD_FILE=setPassword.sh RUN_FILE=runOracle.sh STOP_FILE=stopOracle.sh ENABLE_RAC_FILE=enableRAC.sh CHECK_DB_FILE=checkDBStatus.sh USER_SCRIPTS_FILE=runUserScripts.sh REMOTE_LISTENER_FILE=remoteListener.sh INSTALL_DB_BINARIES_FILE=installDBBinaries.sh GRID_HOME_CLEANUP=GridHomeCleanup.sh ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh DB_USER=oracle GRID_USER=grid FUNCTIONS=functions.sh COMMON_SCRIPTS=/common_scripts CHECK_SPACE_FILE=checkSpace.sh RESET_FAILED_UNITS=resetFailedUnits.sh SET_CRONTAB=setCrontab.sh CRONTAB_ENTRY=crontabEntry EXPECT=/usr/bin/expect BIN=/usr/sbin container=true INSTALL_SCRIPTS=/opt/scripts/install SCRIPT_DIR=/opt/scripts/startup GRID_PATH=/u01/app/19.3.0/grid/bin:/u01/app/19.3.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin DB_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/bin:/u01/app/oracle/product/19.3.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin GRID_LD_LIBRARY_PATH=/u01/app/19.3.0/grid/lib:/usr/lib:/lib DB_LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/lib:/usr/lib:/lib HOME=/home/grid Failed to parse kernel command line, ignoring: No such file or directory systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) Detected virtualization other. Detected architecture x86-64. Welcome to Oracle Linux Server 7.6! Set hostname to <racnode1>. Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory /usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1. Binding to IPv6 address not available since kernel does not support IPv6. Binding to IPv6 address not available since kernel does not support IPv6. Cannot add dependency job for unit display-manager.service, ignoring: Unit not found. [ OK ] Started Forward Password Requests to Wall Directory Watch. [ OK ] Reached target Local Encrypted Volumes. 
[ OK ] Reached target Swap. [ OK ] Started Dispatch Password Requests to Console Directory Watch. [ OK ] Created slice Root Slice. [ OK ] Listening on Journal Socket. [ OK ] Created slice System Slice. Starting Journal Service... [ OK ] Listening on /dev/initctl Compatibility Named Pipe. [ OK ] Listening on Delayed Shutdown Socket. Starting Read and set NIS domainname from /etc/sysconfig/network... [ OK ] Created slice system-getty.slice. [ OK ] Reached target RPC Port Mapper. [ OK ] Created slice User and Session Slice. [ OK ] Reached target Slices. Starting Configure read-only root support... Starting Rebuild Hardware Database... Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory [ OK ] Reached target Local File Systems (Pre). [ OK ] Started Journal Service. [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. Starting Flush Journal to Persistent Storage... [ OK ] Started Configure read-only root support. [ OK ] Reached target Local File Systems. Starting Rebuild Journal Catalog... Starting Mark the need to relabel after reboot... Starting Preprocess NFS configuration... Starting Load/Save Random Seed... [ OK ] Started Flush Journal to Persistent Storage. Starting Create Volatile Files and Directories... [ OK ] Started Rebuild Journal Catalog. [ OK ] Started Mark the need to relabel after reboot. [ OK ] Started Preprocess NFS configuration. [ OK ] Started Load/Save Random Seed. [ OK ] Started Create Volatile Files and Directories. Starting Update UTMP about System Boot/Shutdown... Mounting RPC Pipe File System... [ OK ] Started Update UTMP about System Boot/Shutdown. [FAILED] Failed to mount RPC Pipe File System. See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details. [DEPEND] Dependency failed for rpc_pipefs.target. [DEPEND] Dependency failed for RPC security service for NFS client and server. [ OK ] Started Rebuild Hardware Database. Starting Update is Completed... [ OK ] Started Update is Completed. [ OK ] Reached target System Initialization. [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Listening on RPCbind Server Activation Socket. Starting RPC bind service... [ OK ] Reached target Sockets. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Reached target Basic System. Starting GSSAPI Proxy Daemon... Starting Self Monitoring and Reporting Technology (SMART) Daemon... Starting Resets System Activity Logs... [ OK ] Started D-Bus System Message Bus. Starting LSB: Bring up/down networking... Starting OpenSSH Server Key Generation... Starting Login Service... [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. [ OK ] Started RPC bind service. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Started Resets System Activity Logs. Starting Cleanup of Temporary Directories... [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Permit User Sessions. [ OK ] Started Command Scheduler. [ OK ] Started Cleanup of Temporary Directories. [ OK ] Started Login Service. [ OK ] Started OpenSSH Server Key Generation. [ OK ] Started LSB: Bring up/down networking. [ OK ] Reached target Network. Starting OpenSSH server daemon... 
Starting /etc/rc.d/rc.local Compatibility... [ OK ] Reached target Network is Online. Starting Notify NFS peers of a restart... [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Notify NFS peers of a restart. [ OK ] Started Console Getty. [ OK ] Reached target Login Prompts. [ OK ] Started OpenSSH server daemon. 11-12-2019 07:11:38 UTC : : Process id of the program : 11-12-2019 07:11:38 UTC : : ################################################# 11-12-2019 07:11:38 UTC : : Starting Grid Installation 11-12-2019 07:11:38 UTC : : ################################################# 11-12-2019 07:11:38 UTC : : Pre-Grid Setup steps are in process 11-12-2019 07:11:38 UTC : : Process id of the program : 11-12-2019 07:11:38 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory 11-12-2019 07:11:38 UTC : : Resetting Failed Services 11-12-2019 07:11:38 UTC : : Sleeping for 60 seconds [ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon. [ OK ] Reached target Multi-User System. [ OK ] Reached target Graphical Interface. Starting Update UTMP about System Runlevel Changes... [ OK ] Started Update UTMP about System Runlevel Changes. Oracle Linux Server 7.6 Kernel 4.14.35-1902.6.6.el7uek.x86_64 on an x86_64 racnode1 login: 11-12-2019 07:12:38 UTC : : Systemctl state is running! 11-12-2019 07:12:38 UTC : : Setting correct permissions for /bin/ping 11-12-2019 07:12:38 UTC : : Public IP is set to 172.16.1.150 11-12-2019 07:12:38 UTC : : RAC Node PUBLIC Hostname is set to racnode1 11-12-2019 07:12:38 UTC : : Preparing host line for racnode1 11-12-2019 07:12:38 UTC : : Adding \n172.16.1.150\tracnode1.example.com\tracnode1 to /etc/hosts 11-12-2019 07:12:38 UTC : : Preparing host line for racnode1-priv 11-12-2019 07:12:38 UTC : : Adding \n192.168.17.150\tracnode1-priv.example.com\tracnode1-priv to /etc/hosts 11-12-2019 07:12:38 UTC : : Preparing host line for racnode1-vip 11-12-2019 07:12:38 UTC : : Adding \n172.16.1.160\tracnode1-vip.example.com\tracnode1-vip to /etc/hosts 11-12-2019 07:12:38 UTC : : Preparing host line for racnode-scan 11-12-2019 07:12:38 UTC : : Adding \n172.16.1.70\tracnode-scan.example.com\tracnode-scan to /etc/hosts 11-12-2019 07:12:38 UTC : : Preapring Device list 11-12-2019 07:12:38 UTC : : Changing Disk permission and ownership /dev/asm_disk1 11-12-2019 07:12:38 UTC : : DNS_SERVERS is set to empty. /etc/resolv.conf will use default dns docker embedded server. 11-12-2019 07:12:38 UTC : : ##################################################################### 11-12-2019 07:12:38 UTC : : RAC setup will begin in 2 minutes 11-12-2019 07:12:38 UTC : : #################################################################### 11-12-2019 07:12:40 UTC : : ################################################### 11-12-2019 07:12:40 UTC : : Pre-Grid Setup steps completed 11-12-2019 07:12:40 UTC : : ################################################### 11-12-2019 07:12:40 UTC : : Checking if grid is already configured 11-12-2019 07:12:40 UTC : : Process id of the program : 11-12-2019 07:12:40 UTC : : Public IP is set to 172.16.1.150 11-12-2019 07:12:40 UTC : : RAC Node PUBLIC Hostname is set to racnode1 11-12-2019 07:12:40 UTC : : Domain is defined to example.com 11-12-2019 07:12:40 UTC : : Default setting of AUTO GNS VIP set to false. 
If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true 11-12-2019 07:12:40 UTC : : RAC VIP set to 172.16.1.160 11-12-2019 07:12:40 UTC : : RAC Node VIP hostname is set to racnode1-vip 11-12-2019 07:12:40 UTC : : SCAN_NAME name is racnode-scan 11-12-2019 07:12:40 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port. 11-12-2019 07:12:41 UTC : : 172.16.1.70 11-12-2019 07:12:41 UTC : : SCAN Name resolving to IP. Check Passed! 11-12-2019 07:12:41 UTC : : SCAN_IP name is 172.16.1.70 11-12-2019 07:12:41 UTC : : RAC Node PRIV IP is set to 192.168.17.150 11-12-2019 07:12:41 UTC : : RAC Node private hostname is set to racnode1-priv 11-12-2019 07:12:41 UTC : : CMAN_NAME set to the empty string 11-12-2019 07:12:41 UTC : : CMAN_IP set to the empty string 11-12-2019 07:12:41 UTC : : Cluster Name is not defined 11-12-2019 07:12:41 UTC : : Cluster name is set to 'racnode-c' 11-12-2019 07:12:41 UTC : : Password file generated 11-12-2019 07:12:41 UTC : : Common OS Password string is set for Grid user 11-12-2019 07:12:41 UTC : : Common OS Password string is set for Oracle user 11-12-2019 07:12:41 UTC : : Common OS Password string is set for Oracle Database 11-12-2019 07:12:41 UTC : : Setting CONFIGURE_GNS to false 11-12-2019 07:12:41 UTC : : GRID_RESPONSE_FILE env variable set to empty. configGrid.sh will use standard cluster responsefile 11-12-2019 07:12:41 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts 11-12-2019 07:12:41 UTC : : IGNORE_CVU_CHECKS is set to true 11-12-2019 07:12:41 UTC : : Oracle SID is set to ORCLCDB 11-12-2019 07:12:41 UTC : : Oracle PDB name is set to ORCLPDB 11-12-2019 07:12:41 UTC : : Check passed for network card eth1 for public IP 172.16.1.150 11-12-2019 07:12:41 UTC : : Public Netmask : 255.255.255.0 11-12-2019 07:12:41 UTC : : Check passed for network card eth0 for private IP 192.168.17.150 11-12-2019 07:12:41 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File 11-12-2019 07:12:41 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5 11-12-2019 07:12:41 UTC : : Setting random password for grid user 11-12-2019 07:12:41 UTC : : Setting random password for oracle user 11-12-2019 07:12:41 UTC : : Calling setupSSH function 11-12-2019 07:12:41 UTC : : SSh will be setup among racnode1 nodes 11-12-2019 07:12:41 UTC : : Running SSH setup for grid user between nodes racnode1 11-12-2019 07:13:17 UTC : : Running SSH setup for oracle user between nodes racnode1 11-12-2019 07:13:23 UTC : : SSH check fine for the racnode1 11-12-2019 07:13:23 UTC : : SSH check fine for the oracle@racnode1 11-12-2019 07:13:23 UTC : : Preapring Device list 11-12-2019 07:13:23 UTC : : Changing Disk permission and ownership 11-12-2019 07:13:23 UTC : : ASM Disk size : 0 11-12-2019 07:13:23 UTC : : ASM Device list will be with failure groups /dev/asm_disk1, 11-12-2019 07:13:23 UTC : : ASM Device list will be groups /dev/asm_disk1 11-12-2019 07:13:23 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. 
GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC 11-12-2019 07:13:23 UTC : : Nodes in the cluster racnode1 11-12-2019 07:13:23 UTC : : Setting Device permissions for RAC Install on racnode1 11-12-2019 07:13:23 UTC : : Preapring ASM Device list 11-12-2019 07:13:23 UTC : : Changing Disk permission and ownership 11-12-2019 07:13:23 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1 11-12-2019 07:13:24 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1 11-12-2019 07:13:24 UTC : : Populate Rac Env Vars on Remote Hosts 11-12-2019 07:13:24 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode1 11-12-2019 07:13:24 UTC : : Generating Reponsefile 11-12-2019 07:13:24 UTC : : Running cluvfy Checks 11-12-2019 07:13:24 UTC : : Performing Cluvfy Checks 11-12-2019 07:13:45 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check. ERROR: PRVG-10467 : The default Oracle Inventory group could not be determined. Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...FAILED (PRVF-7573) Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: asmdba ...PASSED Verifying Group Membership: asmdba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...FAILED (PRVG-1201) Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...FAILED (PRVG-1205) Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: kmod-20-21 (x86_64) ...PASSED Verifying Package: kmod-libs-20-21 (x86_64) ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED Verifying Port Availability for component 
"Oracle Notification Service (ONS)" ...PASSED Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED Verifying Port Availability for component "Oracle Database Listener" ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Host name ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying Node Connectivity ...WARNING (PRVF-6006, PRKC-1182, PRVG-11078) Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED (PRVG-1017) Verifying Same core file name pattern ...PASSED Verifying User Mask ...PASSED Verifying User Not In Group "root": grid ...PASSED Verifying Time zone consistency ...PASSED Verifying VIP Subnet configuration check ...PASSED Verifying resolv.conf Integrity ...FAILED (PRVG-10048) Verifying DNS/NIS name service ... Verifying Name Service Switch Configuration File Integrity ...PASSED Verifying DNS/NIS name service ...FAILED (PRVG-1101) Verifying Single Client Access Name (SCAN) ...WARNING (PRVG-11368) Verifying Domain Sockets ...PASSED Verifying /boot mount ...PASSED Verifying Daemon "avahi-daemon" not configured and running ...PASSED Verifying Daemon "proxyt" not configured and running ...PASSED Verifying loopback network interface address ...PASSED Verifying Oracle base: /u01/app/grid ... Verifying '/u01/app/grid' ...PASSED Verifying Oracle base: /u01/app/grid ...PASSED Verifying User Equivalence ...PASSED Verifying RPM Package Manager database ...INFORMATION (PRVG-11250) Verifying Network interface bonding status of private interconnect network interfaces ...PASSED Verifying /dev/shm mounted as temporary file system ...PASSED Verifying File system mount options for path /var ...PASSED Verifying DefaultTasksMax parameter ...PASSED Verifying zeroconf check ...PASSED Verifying ASM Filter Driver configuration ...PASSED Verifying Systemd login manager IPC parameter ...PASSED Verifying Access control attributes for cluster manifest file ...PASSED Pre-check for cluster services setup was unsuccessful on all the nodes. Failures were encountered during execution of CVU verification request "stage -pre crsinst". Verifying Swap Size ...FAILED racnode1: PRVF-7573 : Sufficient swap size is not available on node "racnode1" [Required = 16GB (1.6777216E7KB) ; Found = 8GB (8388604.0KB)] Verifying OS Kernel Parameter: shmall ...FAILED racnode1: PRVG-1201 : OS kernel parameter "shmall" does not have expected configured value on node "racnode1" [Expected = "2251799813685247" ; Current = "18446744073692774000"; Configured = "1073741824"]. Verifying OS Kernel Parameter: aio-max-nr ...FAILED racnode1: PRVG-1205 : OS kernel parameter "aio-max-nr" does not have expected current value on node "racnode1" [Expected = "1048576" ; Current = "65536"; Configured = "1048576"]. 
Verifying Node Connectivity ...WARNING PRVF-6006 : unable to reach the IP addresses "192.168.17.150" from the local node PRKC-1182 : failed to verify whether the nodes are accessible on the network and running PRVG-11078 : node connectivity failed for subnet "192.168.17.0" Verifying Network Time Protocol (NTP) ...FAILED racnode1: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on nodes "racnode1" on which NTP daemon or service was not running Verifying resolv.conf Integrity ...FAILED racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11". Verifying DNS/NIS name service ...FAILED PRVG-1101 : SCAN name "racnode-scan" failed to resolve Verifying Single Client Access Name (SCAN) ...WARNING racnode1: PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses, but SCAN "racnode-scan" resolves to only "172.16.1.70" Verifying RPM Package Manager database ...INFORMATION PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges. CVU operation performed: stage -pre crsinst Date: Nov 12, 2019 7:13:26 AM CVU home: /u01/app/19.3.0/grid/ User: grid 11-12-2019 07:13:45 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks. 11-12-2019 07:13:45 UTC : : Running Grid Installation 11-12-2019 07:14:22 UTC : : Running root.sh 11-12-2019 07:14:22 UTC : : Nodes in the cluster racnode1 11-12-2019 07:14:22 UTC : : Running root.sh on racnode1 Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory 11-12-2019 07:24:14 UTC : : Running post root.sh steps 11-12-2019 07:24:14 UTC : : Running post root.sh steps to setup Grid env 11-12-2019 07:25:35 UTC : : Checking Cluster Status 11-12-2019 07:25:35 UTC : : Nodes in the cluster 11-12-2019 07:25:35 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed 11-12-2019 07:25:35 UTC : : Running User Script for grid user 11-12-2019 07:25:35 UTC : : Generating DB Responsefile Running DB creation 11-12-2019 07:25:35 UTC : : Running DB creation 11-12-2019 07:53:46 UTC : : Checking DB status 11-12-2019 07:53:49 UTC : : ################################################################# 11-12-2019 07:53:49 UTC : : Oracle Database ORCLCDB is up and running on racnode1 11-12-2019 07:53:49 UTC : : ################################################################# 11-12-2019 07:53:49 UTC : : Running User Script oracle user 11-12-2019 07:53:49 UTC : : Setting Remote Listener 11-12-2019 07:53:49 UTC : : #################################### 11-12-2019 07:53:49 UTC : : 
ORACLE RAC DATABASE IS READY TO USE! 11-12-2019 07:53:49 UTC : : ####################################
Note the last three lines: they indicate successful completion.
Log in to the container:
docker exec -it racnode1 bash
Confirm that both GI and the database are healthy:
[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@racnode1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     51200    46924                0           46924              0             Y  DATA/
$ ps -ef|grep tns
grid      5871  3395  0 08:14 pts/1   00:00:00 grep --color=auto tns
grid     19902     1  0 07:22 ?       00:00:01 /u01/app/19.3.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid     20289     1  0 07:23 ?       00:00:00 /u01/app/19.3.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid     22749     1  0 07:24 ?       00:00:00 /u01/app/19.3.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
$ export ORACLE_HOME=/u01/app/19.3.0/grid
[grid@racnode1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 12-NOV-2019 08:33:35

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                12-NOV-2019 07:24:06
Uptime                    0 days 1 hr. 9 min. 29 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.3.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/racnode1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.150)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.160)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=racnode1.example.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/19.3.0/dbhome_1/admin/ORCLCDB/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "972286ba73d76975e053960110ac5131" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
Service "ORCLCDB" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
Service "ORCLCDBXDB" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
Service "orclpdb" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@racnode1 ~]$ sudo -s
bash-4.2# su - oracle
[oracle@racnode1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1/
[oracle@racnode1 ~]$ cd $ORACLE_HOME/network/admin
[oracle@racnode1 admin]$ cat tnsnames.ora
ORCLCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLCDB)
    )
  )

[oracle@racnode1 admin]$ sqlplus sys/Oracle.123#@ORCLCDB as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 12 12:13:30 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
ORCLCDB1

SQL> select name from v$database;

NAME
---------
ORCLCDB

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
At this step you can also verify that the OS password set earlier allows login (using su - username).
Log in to the database directly (though in my case the .bashrc below was not found):
docker exec -it racnode1 bash -c "source /home/oracle/.bashrc; sqlplus /nolog"
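Since the image may not ship /home/oracle/.bashrc, here is a sketch of an equivalent login that sets the environment explicitly (paths as installed above; the instance name on the first node is ORCLCDB1, per the verification output):
docker exec -it racnode1 bash
su - oracle
export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
export ORACLE_SID=ORCLCDB1
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba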
If Oracle Instant Client is available outside the containers, you can also connect to the database directly; see the documentation for details.
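For example, from a client machine that can resolve and reach the SCAN address (an assumption; by default the bridge networks here are only routable from the Docker host itself):
sqlplus sys/Oracle.123#@//racnode-scan.example.com:1521/ORCLCDB as sysdba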
First, create the racnode2 container:
docker create -t -i \
  --hostname racnode2 \
  --volume /dev/shm \
  --tmpfs /dev/shm:rw,exec,size=4G \
  --volume /boot:/boot:ro \
  --dns-search=example.com \
  --volume /opt/containers/rac_host_file:/etc/hosts \
  --volume /opt/.secrets:/run/secrets \
  --device=/dev/sdb:/dev/asm_disk1 \
  --privileged=false \
  --cap-add=SYS_NICE \
  --cap-add=SYS_RESOURCE \
  --cap-add=NET_ADMIN \
  -e EXISTING_CLS_NODES=racnode1 \
  -e NODE_VIP=172.16.1.161 \
  -e VIP_HOSTNAME=racnode2-vip \
  -e PRIV_IP=192.168.17.151 \
  -e PRIV_HOSTNAME=racnode2-priv \
  -e PUBLIC_IP=172.16.1.151 \
  -e PUBLIC_HOSTNAME=racnode2 \
  -e DOMAIN=example.com \
  -e SCAN_NAME=racnode-scan \
  -e SCAN_IP=172.16.1.70 \
  -e ASM_DISCOVERY_DIR=/dev \
  -e ASM_DEVICE_LIST=/dev/asm_disk1 \
  -e ORACLE_SID=ORCLCDB \
  -e OP_TYPE=ADDNODE \
  -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
  -e PWD_KEY=pwd.key \
  --tmpfs=/run -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --cpu-rt-runtime=95000 \
  --ulimit rtprio=99 \
  --restart=always \
  --name racnode2 \
  oracle/database-rac:19.3.0
Assign networks to the second container:
docker network disconnect bridge racnode2
docker network connect rac_pub1_nw --ip 172.16.1.151 racnode2
docker network connect rac_priv1_nw --ip 192.168.17.151 racnode2
Start the second container:
docker start racnode2
Follow the logs:
docker logs -f racnode2
Below is the complete log of a successful node addition; it took 12 minutes:
$ docker logs -f racnode2 PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=racnode2 TERM=xterm EXISTING_CLS_NODES=racnode1 NODE_VIP=172.16.1.161 VIP_HOSTNAME=racnode2-vip PRIV_IP=192.168.17.151 PRIV_HOSTNAME=racnode2-priv PUBLIC_IP=172.16.1.151 PUBLIC_HOSTNAME=racnode2 DOMAIN=example.com SCAN_NAME=racnode-scan SCAN_IP=172.16.1.70 ASM_DISCOVERY_DIR=/dev ASM_DEVICE_LIST=/dev/asm_disk1 ORACLE_SID=ORCLCDB OP_TYPE=ADDNODE COMMON_OS_PWD_FILE=common_os_pwdfile.enc PWD_KEY=pwd.key SETUP_LINUX_FILE=setupLinuxEnv.sh INSTALL_DIR=/opt/scripts GRID_BASE=/u01/app/grid GRID_HOME=/u01/app/19.3.0/grid INSTALL_FILE_1=LINUX.X64_193000_grid_home.zip GRID_INSTALL_RSP=gridsetup_19c.rsp GRID_SW_INSTALL_RSP=grid_sw_install_19c.rsp GRID_SETUP_FILE=setupGrid.sh FIXUP_PREQ_FILE=fixupPreq.sh INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh INSTALL_GRID_PATCH=applyGridPatch.sh INVENTORY=/u01/app/oraInventory CONFIGGRID=configGrid.sh ADDNODE=AddNode.sh DELNODE=DelNode.sh ADDNODE_RSP=grid_addnode.rsp SETUPSSH=setupSSH.expect DOCKERORACLEINIT=dockeroracleinit GRID_USER_HOME=/home/grid SETUPGRIDENV=setupGridEnv.sh RESET_OS_PASSWORD=resetOSPassword.sh MULTI_NODE_INSTALL=MultiNodeInstall.py DB_BASE=/u01/app/oracle DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1 INSTALL_FILE_2=LINUX.X64_193000_db_home.zip DB_INSTALL_RSP=db_sw_install_19c.rsp DBCA_RSP=dbca_19c.rsp DB_SETUP_FILE=setupDB.sh PWD_FILE=setPassword.sh RUN_FILE=runOracle.sh STOP_FILE=stopOracle.sh ENABLE_RAC_FILE=enableRAC.sh CHECK_DB_FILE=checkDBStatus.sh USER_SCRIPTS_FILE=runUserScripts.sh REMOTE_LISTENER_FILE=remoteListener.sh INSTALL_DB_BINARIES_FILE=installDBBinaries.sh GRID_HOME_CLEANUP=GridHomeCleanup.sh ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh DB_USER=oracle GRID_USER=grid FUNCTIONS=functions.sh COMMON_SCRIPTS=/common_scripts CHECK_SPACE_FILE=checkSpace.sh RESET_FAILED_UNITS=resetFailedUnits.sh SET_CRONTAB=setCrontab.sh CRONTAB_ENTRY=crontabEntry EXPECT=/usr/bin/expect BIN=/usr/sbin container=true INSTALL_SCRIPTS=/opt/scripts/install SCRIPT_DIR=/opt/scripts/startup GRID_PATH=/u01/app/19.3.0/grid/bin:/u01/app/19.3.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin DB_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/bin:/u01/app/oracle/product/19.3.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin GRID_LD_LIBRARY_PATH=/u01/app/19.3.0/grid/lib:/usr/lib:/lib DB_LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/lib:/usr/lib:/lib HOME=/home/grid Failed to parse kernel command line, ignoring: No such file or directory systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) Detected virtualization other. Detected architecture x86-64. Welcome to Oracle Linux Server 7.6! Set hostname to <racnode2>. Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory /usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1. Binding to IPv6 address not available since kernel does not support IPv6. Binding to IPv6 address not available since kernel does not support IPv6. Cannot add dependency job for unit display-manager.service, ignoring: Unit not found. [ OK ] Started Forward Password Requests to Wall Directory Watch. 
[ OK ] Created slice Root Slice. [ OK ] Listening on Delayed Shutdown Socket. [ OK ] Created slice User and Session Slice. [ OK ] Listening on Journal Socket. [ OK ] Reached target RPC Port Mapper. [ OK ] Listening on /dev/initctl Compatibility Named Pipe. [ OK ] Reached target Local Encrypted Volumes. [ OK ] Started Dispatch Password Requests to Console Directory Watch. [ OK ] Reached target Swap. [ OK ] Created slice System Slice. Starting Journal Service... [ OK ] Created slice system-getty.slice. Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory Starting Configure read-only root support... Starting Read and set NIS domainname from /etc/sysconfig/network... [ OK ] Reached target Local File Systems (Pre). [ OK ] Reached target Slices. Starting Rebuild Hardware Database... [ OK ] Started Journal Service. [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. Starting Flush Journal to Persistent Storage... [ OK ] Started Configure read-only root support. [ OK ] Started Flush Journal to Persistent Storage. [ OK ] Reached target Local File Systems. Starting Preprocess NFS configuration... Starting Rebuild Journal Catalog... Starting Mark the need to relabel after reboot... Starting Load/Save Random Seed... Starting Create Volatile Files and Directories... [ OK ] Started Preprocess NFS configuration. [ OK ] Started Rebuild Journal Catalog. [ OK ] Started Load/Save Random Seed. [ OK ] Started Mark the need to relabel after reboot. [ OK ] Started Create Volatile Files and Directories. Starting Update UTMP about System Boot/Shutdown... Mounting RPC Pipe File System... [FAILED] Failed to mount RPC Pipe File System. See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details. [DEPEND] Dependency failed for rpc_pipefs.target. [DEPEND] Dependency failed for RPC security service for NFS client and server. [ OK ] Started Update UTMP about System Boot/Shutdown. [ OK ] Started Rebuild Hardware Database. Starting Update is Completed... [ OK ] Started Update is Completed. [ OK ] Reached target System Initialization. [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Listening on RPCbind Server Activation Socket. Starting RPC bind service... [ OK ] Reached target Sockets. [ OK ] Reached target Basic System. Starting GSSAPI Proxy Daemon... Starting Resets System Activity Logs... Starting Login Service... Starting OpenSSH Server Key Generation... Starting LSB: Bring up/down networking... [ OK ] Started D-Bus System Message Bus. Starting Self Monitoring and Reporting Technology (SMART) Daemon... [ OK ] Started RPC bind service. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Started Resets System Activity Logs. Starting Cleanup of Temporary Directories... [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Login Service. [ OK ] Started Permit User Sessions. [ OK ] Started Command Scheduler. [ OK ] Started Cleanup of Temporary Directories. [ OK ] Started OpenSSH Server Key Generation. [ OK ] Started LSB: Bring up/down networking. [ OK ] Reached target Network. 
Starting /etc/rc.d/rc.local Compatibility... Starting OpenSSH server daemon... [ OK ] Reached target Network is Online. Starting Notify NFS peers of a restart... [ OK ] Started Notify NFS peers of a restart. [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started OpenSSH server daemon. [ OK ] Started Console Getty. [ OK ] Reached target Login Prompts. 11-12-2019 09:26:49 UTC : : Process id of the program : 11-12-2019 09:26:49 UTC : : ################################################# 11-12-2019 09:26:49 UTC : : Starting Grid Installation 11-12-2019 09:26:49 UTC : : ################################################# 11-12-2019 09:26:49 UTC : : Pre-Grid Setup steps are in process 11-12-2019 09:26:49 UTC : : Process id of the program : 11-12-2019 09:26:49 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory 11-12-2019 09:26:49 UTC : : Resetting Failed Services 11-12-2019 09:26:49 UTC : : Sleeping for 60 seconds [ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon. [ OK ] Reached target Multi-User System. [ OK ] Reached target Graphical Interface. Starting Update UTMP about System Runlevel Changes... [ OK ] Started Update UTMP about System Runlevel Changes. Oracle Linux Server 7.6 Kernel 4.14.35-1902.6.6.el7uek.x86_64 on an x86_64 racnode2 login: 11-12-2019 09:27:49 UTC : : Systemctl state is running! 11-12-2019 09:27:49 UTC : : Setting correct permissions for /bin/ping 11-12-2019 09:27:49 UTC : : Public IP is set to 172.16.1.151 11-12-2019 09:27:49 UTC : : RAC Node PUBLIC Hostname is set to racnode2 11-12-2019 09:27:49 UTC : : racnode2 already exists : 172.16.1.151 racnode2.example.com racnode2 192.168.17.151 racnode2-priv.example.com racnode2-priv 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required 11-12-2019 09:27:49 UTC : : racnode2-priv already exists : 192.168.17.151 racnode2-priv.example.com racnode2-priv, no update required 11-12-2019 09:27:49 UTC : : racnode2-vip already exists : 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required 11-12-2019 09:27:49 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required 11-12-2019 09:27:49 UTC : : Preapring Device list 11-12-2019 09:27:49 UTC : : Changing Disk permission and ownership /dev/asm_disk1 11-12-2019 09:27:49 UTC : : DNS_SERVERS is set to empty. /etc/resolv.conf will use default dns docker embedded server. 11-12-2019 09:27:49 UTC : : ##################################################################### 11-12-2019 09:27:49 UTC : : RAC setup will begin in 2 minutes 11-12-2019 09:27:49 UTC : : #################################################################### 11-12-2019 09:27:51 UTC : : ################################################### 11-12-2019 09:27:51 UTC : : Pre-Grid Setup steps completed 11-12-2019 09:27:51 UTC : : ################################################### 11-12-2019 09:27:51 UTC : : Checking if grid is already configured 11-12-2019 09:27:51 UTC : : Public IP is set to 172.16.1.151 11-12-2019 09:27:51 UTC : : RAC Node PUBLIC Hostname is set to racnode2 11-12-2019 09:27:51 UTC : : Domain is defined to example.com 11-12-2019 09:27:51 UTC : : Setting Existing Cluster Node for node addition operation. 
This will be retrieved from racnode1 11-12-2019 09:27:51 UTC : : Existing Node Name of the cluster is set to racnode1 11-12-2019 09:27:51 UTC : : 172.16.1.150 11-12-2019 09:27:51 UTC : : Existing Cluster node resolved to IP. Check passed 11-12-2019 09:27:51 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true 11-12-2019 09:27:51 UTC : : RAC VIP set to 172.16.1.161 11-12-2019 09:27:51 UTC : : RAC Node VIP hostname is set to racnode2-vip 11-12-2019 09:27:51 UTC : : SCAN_NAME name is racnode-scan 11-12-2019 09:27:51 UTC : : 172.16.1.70 11-12-2019 09:27:51 UTC : : SCAN Name resolving to IP. Check Passed! 11-12-2019 09:27:51 UTC : : SCAN_IP name is 172.16.1.70 11-12-2019 09:27:51 UTC : : RAC Node PRIV IP is set to 192.168.17.151 11-12-2019 09:27:51 UTC : : RAC Node private hostname is set to racnode2-priv 11-12-2019 09:27:51 UTC : : CMAN_NAME set to the empty string 11-12-2019 09:27:51 UTC : : CMAN_IP set to the empty string 11-12-2019 09:27:51 UTC : : Password file generated 11-12-2019 09:27:51 UTC : : Common OS Password string is set for Grid user 11-12-2019 09:27:51 UTC : : Common OS Password string is set for Oracle user 11-12-2019 09:27:51 UTC : : GRID_RESPONSE_FILE env variable set to empty. AddNode.sh will use standard cluster responsefile 11-12-2019 09:27:51 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts 11-12-2019 09:27:51 UTC : : ORACLE_SID is set to ORCLCDB 11-12-2019 09:27:51 UTC : : Setting random password for root/grid/oracle user 11-12-2019 09:27:51 UTC : : Setting random password for grid user 11-12-2019 09:27:51 UTC : : Setting random password for oracle user 11-12-2019 09:27:51 UTC : : Setting random password for root user 11-12-2019 09:27:51 UTC : : Cluster Nodes are racnode1 racnode2 11-12-2019 09:27:51 UTC : : Running SSH setup for grid user between nodes racnode1 racnode2 11-12-2019 09:28:05 UTC : : Running SSH setup for oracle user between nodes racnode1 racnode2 11-12-2019 09:28:18 UTC : : SSH check fine for the racnode1 11-12-2019 09:28:18 UTC : : SSH check fine for the racnode2 11-12-2019 09:28:18 UTC : : SSH check fine for the racnode2 11-12-2019 09:28:18 UTC : : SSH check fine for the oracle@racnode1 11-12-2019 09:28:19 UTC : : SSH check fine for the oracle@racnode2 11-12-2019 09:28:19 UTC : : SSH check fine for the oracle@racnode2 11-12-2019 09:28:19 UTC : : Setting Device permission to grid and asmadmin on all the cluster nodes 11-12-2019 09:28:19 UTC : : Nodes in the cluster racnode2 11-12-2019 09:28:19 UTC : : Setting Device permissions for RAC Install on racnode2 11-12-2019 09:28:19 UTC : : Preapring ASM Device list 11-12-2019 09:28:19 UTC : : Changing Disk permission and ownership 11-12-2019 09:28:19 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2 11-12-2019 09:28:19 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2 11-12-2019 09:28:19 UTC : : Populate Rac Env Vars on Remote Hosts 11-12-2019 09:28:19 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2 11-12-2019 09:28:19 UTC : : Checking Cluster Status on racnode1 11-12-2019 09:28:19 UTC : : Checking Cluster 11-12-2019 09:28:20 UTC : : Cluster Check on remote node passed 11-12-2019 09:28:20 UTC : : Cluster Check went fine 11-12-2019 09:28:20 UTC : : CRSD Check went fine 11-12-2019 09:28:20 UTC : : CSSD Check 
went fine 11-12-2019 09:28:21 UTC : : EVMD Check went fine 11-12-2019 09:28:21 UTC : : Generating Responsefile for node addition 11-12-2019 09:28:21 UTC : : Clustered Nodes are set to racnode2:racnode2-vip:HUB 11-12-2019 09:28:21 UTC : : Running Cluster verification utility for new node racnode2 on racnode1 11-12-2019 09:28:21 UTC : : Nodes in the cluster racnode2 11-12-2019 09:28:21 UTC : : ssh to the node racnode1 and executing cvu checks on racnode2 11-12-2019 09:29:25 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check. Verifying Physical Memory ...PASSED Verifying Available Physical Memory ...PASSED Verifying Swap Size ...FAILED (PRVF-7573) Verifying Free Space: racnode2:/usr,racnode2:/var,racnode2:/etc,racnode2:/u01/app/19.3.0/grid,racnode2:/sbin,racnode2:/tmp ...PASSED Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/u01/app/19.3.0/grid,racnode1:/sbin,racnode1:/tmp ...PASSED Verifying User Existence: oracle ... Verifying Users With Same UID: 54321 ...PASSED Verifying User Existence: oracle ...PASSED Verifying User Existence: grid ... Verifying Users With Same UID: 54332 ...PASSED Verifying User Existence: grid ...PASSED Verifying User Existence: root ... Verifying Users With Same UID: 0 ...PASSED Verifying User Existence: root ...PASSED Verifying Group Existence: asmadmin ...PASSED Verifying Group Existence: asmoper ...PASSED Verifying Group Existence: asmdba ...PASSED Verifying Group Existence: oinstall ...PASSED Verifying Group Membership: oinstall ...PASSED Verifying Group Membership: asmdba ...PASSED Verifying Group Membership: asmadmin ...PASSED Verifying Group Membership: asmoper ...PASSED Verifying Run Level ...PASSED Verifying Hard Limit: maximum open file descriptors ...PASSED Verifying Soft Limit: maximum open file descriptors ...PASSED Verifying Hard Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum user processes ...PASSED Verifying Soft Limit: maximum stack size ...PASSED Verifying Architecture ...PASSED Verifying OS Kernel Version ...PASSED Verifying OS Kernel Parameter: semmsl ...PASSED Verifying OS Kernel Parameter: semmns ...PASSED Verifying OS Kernel Parameter: semopm ...PASSED Verifying OS Kernel Parameter: semmni ...PASSED Verifying OS Kernel Parameter: shmmax ...PASSED Verifying OS Kernel Parameter: shmmni ...PASSED Verifying OS Kernel Parameter: shmall ...FAILED (PRVG-1201) Verifying OS Kernel Parameter: file-max ...PASSED Verifying OS Kernel Parameter: aio-max-nr ...FAILED (PRVG-1205) Verifying OS Kernel Parameter: panic_on_oops ...PASSED Verifying Package: kmod-20-21 (x86_64) ...PASSED Verifying Package: kmod-libs-20-21 (x86_64) ...PASSED Verifying Package: binutils-2.23.52.0.1 ...PASSED Verifying Package: compat-libcap1-1.10 ...PASSED Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED Verifying Package: sysstat-10.1.5 ...PASSED Verifying Package: ksh ...PASSED Verifying Package: make-3.82 ...PASSED Verifying Package: glibc-2.17 (x86_64) ...PASSED Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED Verifying Package: libaio-0.3.109 (x86_64) ...PASSED Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED Verifying Package: nfs-utils-1.2.3-15 ...PASSED Verifying Package: smartmontools-6.2-4 ...PASSED Verifying Package: net-tools-2.0-0.17 ...PASSED Verifying Users With Same UID: 0 ...PASSED Verifying Current Group ID ...PASSED Verifying Root user consistency ...PASSED Verifying Node 
Addition ... Verifying CRS Integrity ...PASSED Verifying Clusterware Version Consistency ...PASSED Verifying '/u01/app/19.3.0/grid' ...PASSED Verifying Node Addition ...PASSED Verifying Host name ...PASSED Verifying Node Connectivity ... Verifying Hosts File ...PASSED Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED Verifying Node Connectivity ...PASSED Verifying ASM Integrity ...PASSED Verifying Device Checks for ASM ... Verifying Access Control List check ...PASSED Verifying Device Checks for ASM ...PASSED Verifying Database home availability ...PASSED Verifying OCR Integrity ...PASSED Verifying Time zone consistency ...PASSED Verifying Network Time Protocol (NTP) ... Verifying '/etc/ntp.conf' ...PASSED Verifying '/var/run/ntpd.pid' ...PASSED Verifying '/var/run/chronyd.pid' ...PASSED Verifying Network Time Protocol (NTP) ...FAILED (PRVG-1017) Verifying User Not In Group "root": grid ...PASSED Verifying Time offset between nodes ...PASSED Verifying resolv.conf Integrity ...FAILED (PRVG-10048) Verifying DNS/NIS name service ...PASSED Verifying User Equivalence ...PASSED Verifying /dev/shm mounted as temporary file system ...PASSED Verifying /boot mount ...PASSED Verifying zeroconf check ...PASSED Pre-check for node addition was unsuccessful on all the nodes. Failures were encountered during execution of CVU verification request "stage -pre nodeadd". Verifying Swap Size ...FAILED racnode2: PRVF-7573 : Sufficient swap size is not available on node "racnode2" [Required = 16GB (1.6777216E7KB) ; Found = 8GB (8388604.0KB)] racnode1: PRVF-7573 : Sufficient swap size is not available on node "racnode1" [Required = 16GB (1.6777216E7KB) ; Found = 8GB (8388604.0KB)] Verifying OS Kernel Parameter: shmall ...FAILED racnode2: PRVG-1201 : OS kernel parameter "shmall" does not have expected configured value on node "racnode2" [Expected = "2251799813685247" ; Current = "18446744073692774000"; Configured = "1073741824"]. racnode1: PRVG-1201 : OS kernel parameter "shmall" does not have expected configured value on node "racnode1" [Expected = "2251799813685247" ; Current = "18446744073692774000"; Configured = "1073741824"]. Verifying OS Kernel Parameter: aio-max-nr ...FAILED racnode2: PRVG-1205 : OS kernel parameter "aio-max-nr" does not have expected current value on node "racnode2" [Expected = "1048576" ; Current = "65536"; Configured = "1048576"]. racnode1: PRVG-1205 : OS kernel parameter "aio-max-nr" does not have expected current value on node "racnode1" [Expected = "1048576" ; Current = "65536"; Configured = "1048576"]. Verifying Network Time Protocol (NTP) ...FAILED racnode2: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on nodes "racnode2,racnode1" on which NTP daemon or service was not running racnode1: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on nodes "racnode2,racnode1" on which NTP daemon or service was not running Verifying resolv.conf Integrity ...FAILED racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers "127.0.0.11". racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11". CVU operation performed: stage -pre nodeadd Date: Nov 12, 2019 9:28:24 AM CVU home: /u01/app/19.3.0/grid/ User: grid 11-12-2019 09:29:25 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. 
It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks. 11-12-2019 09:29:25 UTC : : Running Node Addition and cluvfy test for node racnode2 11-12-2019 09:29:25 UTC : : Copying /tmp/grid_addnode.rsp on remote node racnode1 11-12-2019 09:29:25 UTC : : Running GridSetup.sh on racnode1 to add the node to existing cluster 11-12-2019 09:30:32 UTC : : Node Addition performed. removing Responsefile 11-12-2019 09:30:32 UTC : : Running root.sh on node racnode2 11-12-2019 09:30:32 UTC : : Nodes in the cluster racnode2 Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory 11-12-2019 09:35:01 UTC : : Checking Cluster 11-12-2019 09:35:01 UTC : : Cluster Check passed 11-12-2019 09:35:01 UTC : : Cluster Check went fine 11-12-2019 09:35:02 UTC : : CRSD Check went fine 11-12-2019 09:35:02 UTC : : CSSD Check went fine 11-12-2019 09:35:02 UTC : : EVMD Check went fine 11-12-2019 09:35:02 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed 11-12-2019 09:35:02 UTC : : Checking Cluster Class 11-12-2019 09:35:02 UTC : : Checking Cluster Class 11-12-2019 09:35:02 UTC : : Cluster class is CRS-41008: Cluster class is 'Standalone Cluster' 11-12-2019 09:35:02 UTC : : Running User Script for grid user 11-12-2019 09:35:02 UTC : : Performing DB Node addition 11-12-2019 09:35:47 UTC : : Node Addition went fine for racnode2 11-12-2019 09:35:47 UTC : : Running root.sh 11-12-2019 09:35:47 UTC : : Nodes in the cluster racnode2 11-12-2019 09:35:47 UTC : : Adding DB Instance 11-12-2019 09:35:47 UTC : : Adding DB Instance on racnode1 11-12-2019 09:38:43 UTC : : Checking DB status 11-12-2019 09:38:46 UTC : : ################################################################# 11-12-2019 09:38:46 UTC : : Oracle Database ORCLCDB is up and running on racnode2 11-12-2019 09:38:46 UTC : : ################################################################# 11-12-2019 09:38:46 UTC : : Running User Script for oracle user 11-12-2019 09:38:46 UTC : : Setting Remote Listener 11-12-2019 09:38:46 UTC : : #################################### 11-12-2019 09:38:46 UTC : : ORACLE RAC DATABASE IS READY TO USE! 11-12-2019 09:38:46 UTC : : ####################################
The last 3 lines of the log are the marker of a successful setup.
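If the log is being watched from a script rather than by eye, waiting for that banner is a simple way to detect completion. A minimal sketch (the marker string is taken verbatim from the log above):

# Poll the container log until the readiness banner appears
until docker logs racnode2 2>&1 | grep -q "ORACLE RAC DATABASE IS READY TO USE"; do
  sleep 30
done
echo "racnode2 is ready"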
Log in to the second node to verify:
$ docker exec -i -t racnode2 /bin/bash
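This lands in a shell as the image's default user, grid (visible as "User": "grid" in the docker inspect output later). To open a shell directly as another account, docker exec accepts a --user flag, e.g.:

docker exec -i -t --user oracle racnode2 /bin/bash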
Confirm cluster, ASM disk group, and listener status:
[grid@racnode2 ~]$ crsctl check cluster CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online [grid@racnode2 ~]$ asmcmd lsdg State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name MOUNTED EXTERN N 512 512 4096 4194304 51200 45840 0 45840 0 Y DATA/ [grid@racnode2 ~]$ ps -ef|grep tns grid 13087 1 0 09:34 ? 00:00:00 /u01/app/19.3.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit grid 14388 1 0 09:34 ? 00:00:00 /u01/app/19.3.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit grid 21235 20043 0 09:43 pts/1 00:00:00 grep --color=auto tns [grid@racnode2 ~]$ export ORACLE_HOME=/u01/app/19.3.0/grid [grid@racnode2 ~]$ lsnrctl status LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 12-NOV-2019 09:44:02 Copyright (c) 1991, 2019, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production Start Date 12-NOV-2019 09:34:54 Uptime 0 days 0 hr. 9 min. 7 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/19.3.0/grid/network/admin/listener.ora Listener Log File /u01/app/grid/diag/tnslsnr/racnode2/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.151)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.161)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=racnode2.example.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/19.3.0/dbhome_1/admin/ORCLCDB/xdb_wallet))(Presentation=HTTP)(Session=RAW)) Services Summary... Service "+ASM" has 1 instance(s). Instance "+ASM2", status READY, has 1 handler(s) for this service... Service "+ASM_DATA" has 1 instance(s). Instance "+ASM2", status READY, has 1 handler(s) for this service... Service "972286ba73d76975e053960110ac5131" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... Service "ORCLCDB" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... Service "ORCLCDBXDB" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... Service "orclpdb" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... The command completed successfully [grid@racnode2 ~]$ sudo -s bash-4.2# su - oracle Last login: Tue Nov 12 09:42:38 UTC 2019 on pts/1 [oracle@racnode2 ~]$ export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1/ [oracle@racnode2 ~]$ lsnrctl status LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 12-NOV-2019 09:45:44 Copyright (c) 1991, 2019, Oracle. All rights reserved. Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 19.0.0.0.0 - Production Start Date 12-NOV-2019 09:34:54 Uptime 0 days 0 hr. 10 min. 49 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/19.3.0/grid/network/admin/listener.ora Listener Log File /u01/app/grid/diag/tnslsnr/racnode2/listener/alert/log.xml Listening Endpoints Summary... 
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.151)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.161)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=racnode2.example.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/19.3.0/dbhome_1/admin/ORCLCDB/xdb_wallet))(Presentation=HTTP)(Session=RAW)) Services Summary... Service "+ASM" has 1 instance(s). Instance "+ASM2", status READY, has 1 handler(s) for this service... Service "+ASM_DATA" has 1 instance(s). Instance "+ASM2", status READY, has 1 handler(s) for this service... Service "972286ba73d76975e053960110ac5131" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... Service "ORCLCDB" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... Service "ORCLCDBXDB" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... Service "orclpdb" has 1 instance(s). Instance "ORCLCDB2", status READY, has 1 handler(s) for this service... The command completed successfully [oracle@racnode1 ~]$ sqlplus / as sysdba SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 12 09:57:43 2019 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. ERROR: ORA-12547: TNS:lost contact Enter user-name:
It turned out the connection was attempted without the correct service name; connecting through the listener with the net service name works instead. (The trailing slash in the ORACLE_HOME export above is also a known trigger for ORA-12547.) On node 1:
[oracle@racnode1 admin]$ cat $ORACLE_HOME/network/admin/tnsnames.ora # tnsnames.ora.racnode1 Network Configuration File: /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/tnsnames.ora.racnode1 # Generated by Oracle configuration tools. ORCLCDB = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = racnode-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = ORCLCDB) ) ) [oracle@racnode1 admin]$ sqlplus sys/Oracle.123#@ORCLCDB as sysdba -- 或者 sqlplus system@"racnode-scan:1521/ORCLCDB" SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 12 10:17:34 2019 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.3.0.0.0 SQL> select name from v$database; NAME --------- ORCLCDB SQL> select instance_name from v$instance; INSTANCE_NAME ---------------- ORCLCDB2
On node 2:
[oracle@racnode2 ~]$ export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1 [oracle@racnode2 ~]$ sqlplus sys/Oracle.123#@ORCLCDB as sysdba SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 12 10:21:35 2019 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.3.0.0.0 SQL> select name from v$database; NAME --------- ORCLCDB SQL> select instance_name from v$instance; INSTANCE_NAME ---------------- ORCLCDB2
Success!
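Both instances can also be confirmed in a single query against gv$instance, the standard RAC-wide view, from either node. A quick sketch (the sys password is the one used above):

sqlplus -s sys/Oracle.123#@ORCLCDB as sysdba <<'EOF'
select inst_id, instance_name, host_name, status from gv$instance order by inst_id;
EOF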
Disk space usage on the host at this point:
[opc@instance-20191112-1400-docker ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 15G 0 15G 0% /dev
tmpfs 15G 0 15G 0% /dev/shm
tmpfs 15G 8.8M 15G 1% /run
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/sda3 192G 42G 151G 22% /
/dev/sda1 200M 9.7M 191M 5% /boot/efi
tmpfs 3.0G 0 3.0G 0% /run/user/1000
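The full docker inspect output for both containers follows. When only a few fields are of interest, docker inspect's standard Go-template --format flag keeps things readable; for example, to list each container's addresses on both RAC networks (output matches the dumps below):

docker inspect -f '{{.Name}}: {{range $net, $conf := .NetworkSettings.Networks}}{{$net}}={{$conf.IPAddress}} {{end}}' racnode1 racnode2
# /racnode1: rac_priv1_nw=192.168.17.150 rac_pub1_nw=172.16.1.150
# /racnode2: rac_priv1_nw=192.168.17.151 rac_pub1_nw=172.16.1.151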
[opc@instance-20191112-1400-docker ~]$ docker inspect racnode1 [ { "Id": "fd8b3c79e88fd1df9ca733b390f4c71b8dda3d7c6a3fa9dfd1da005dac2fb986", "Created": "2019-11-12T07:10:25.420466428Z", "Path": "/usr/sbin/oracleinit", "Args": [], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 19653, "ExitCode": 0, "Error": "", "StartedAt": "2019-11-12T07:11:37.817984842Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:29e8018e9d71ab4d0ab518210f56831d2981a4e12f04f5294a54def3e96b8dd0", "ResolvConfPath": "/var/lib/docker/containers/fd8b3c79e88fd1df9ca733b390f4c71b8dda3d7c6a3fa9dfd1da005dac2fb986/resolv.conf", "HostnamePath": "/var/lib/docker/containers/fd8b3c79e88fd1df9ca733b390f4c71b8dda3d7c6a3fa9dfd1da005dac2fb986/hostname", "HostsPath": "/opt/containers/rac_host_file", "LogPath": "/var/lib/docker/containers/fd8b3c79e88fd1df9ca733b390f4c71b8dda3d7c6a3fa9dfd1da005dac2fb986/fd8b3c79e88fd1df9ca733b390f4c71b8dda3d7c6a3fa9dfd1da005dac2fb986-json.log", "Name": "/racnode1", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "system_u:object_r:svirt_sandbox_file_t:s0:c132,c700", "ProcessLabel": "system_u:system_r:svirt_lxc_net_t:s0:c132,c700", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/boot:/boot:ro", "/opt/containers/rac_host_file:/etc/hosts", "/opt/.secrets:/run/secrets", "/sys/fs/cgroup:/sys/fs/cgroup:ro" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "default", "PortBindings": {}, "RestartPolicy": { "Name": "always", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": [ "SYS_NICE", "SYS_RESOURCE", "NET_ADMIN" ], "CapDrop": null, "Dns": [], "DnsOptions": [], "DnsSearch": [ "example.com" ], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "shareable", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "Tmpfs": { "/dev/shm": "rw,exec,size=4G", "/run": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 95000, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/sdb", "PathInContainer": "/dev/asm_disk1", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": [ { "Name": "rtprio", "Hard": 99, "Soft": 99 } ], "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": { "LowerDir": 
"/var/lib/docker/overlay2/eff2b9cc996bc8a64a38d72dc4929f29b3fd77d17d197026f75602a72e65633c-init/diff:/var/lib/docker/overlay2/3cadcaff0b00f54de27f068f84dbbf1187a1bd5b4dbbfee70d78d604644c0bcb/diff:/var/lib/docker/overlay2/8a5505943b109de7302d9593414731eac847caaf15455632c8303366a619a040/diff:/var/lib/docker/overlay2/5ca8ce83cc27adb7e5e6a9036ae5ba02137ce56ad08d74623c92b44a6124764a/diff:/var/lib/docker/overlay2/1403297cb21e20d7cc8bf1d37eab49697dfc514c55e78436b96d273aa299d549/diff", "MergedDir": "/var/lib/docker/overlay2/eff2b9cc996bc8a64a38d72dc4929f29b3fd77d17d197026f75602a72e65633c/merged", "UpperDir": "/var/lib/docker/overlay2/eff2b9cc996bc8a64a38d72dc4929f29b3fd77d17d197026f75602a72e65633c/diff", "WorkDir": "/var/lib/docker/overlay2/eff2b9cc996bc8a64a38d72dc4929f29b3fd77d17d197026f75602a72e65633c/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "volume", "Name": "b659641df5be5e32a54f130c11bae91c88e01cccef4d4cac5bc060a19c8774bc", "Source": "/var/lib/docker/volumes/b659641df5be5e32a54f130c11bae91c88e01cccef4d4cac5bc060a19c8774bc/_data", "Destination": "/common_scripts", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" }, { "Type": "bind", "Source": "/boot", "Destination": "/boot", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/opt/containers/rac_host_file", "Destination": "/etc/hosts", "Mode": "", "RW": true, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/opt/.secrets", "Destination": "/run/secrets", "Mode": "", "RW": true, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/sys/fs/cgroup", "Destination": "/sys/fs/cgroup", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "9594f0044881dd8343ae8fbf6a283be7d4199470ac886b7bfc8d1bfb31556fea", "Source": "/var/lib/docker/volumes/9594f0044881dd8343ae8fbf6a283be7d4199470ac886b7bfc8d1bfb31556fea/_data", "Destination": "/dev/shm", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "racnode1", "Domainname": "", "User": "grid", "AttachStdin": true, "AttachStdout": true, "AttachStderr": true, "Tty": true, "OpenStdin": true, "StdinOnce": true, "Env": [ "NODE_VIP=172.16.1.160", "VIP_HOSTNAME=racnode1-vip", "PRIV_IP=192.168.17.150", "PRIV_HOSTNAME=racnode1-priv", "PUBLIC_IP=172.16.1.150", "PUBLIC_HOSTNAME=racnode1", "SCAN_NAME=racnode-scan", "SCAN_IP=172.16.1.70", "OP_TYPE=INSTALL", "DOMAIN=example.com", "ASM_DEVICE_LIST=/dev/asm_disk1", "ASM_DISCOVERY_DIR=/dev", "COMMON_OS_PWD_FILE=common_os_pwdfile.enc", "PWD_KEY=pwd.key", "PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "SETUP_LINUX_FILE=setupLinuxEnv.sh", "INSTALL_DIR=/opt/scripts", "GRID_BASE=/u01/app/grid", "GRID_HOME=/u01/app/19.3.0/grid", "INSTALL_FILE_1=LINUX.X64_193000_grid_home.zip", "GRID_INSTALL_RSP=gridsetup_19c.rsp", "GRID_SW_INSTALL_RSP=grid_sw_install_19c.rsp", "GRID_SETUP_FILE=setupGrid.sh", "FIXUP_PREQ_FILE=fixupPreq.sh", "INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh", "INSTALL_GRID_PATCH=applyGridPatch.sh", "INVENTORY=/u01/app/oraInventory", "CONFIGGRID=configGrid.sh", "ADDNODE=AddNode.sh", "DELNODE=DelNode.sh", "ADDNODE_RSP=grid_addnode.rsp", "SETUPSSH=setupSSH.expect", "DOCKERORACLEINIT=dockeroracleinit", "GRID_USER_HOME=/home/grid", "SETUPGRIDENV=setupGridEnv.sh", "RESET_OS_PASSWORD=resetOSPassword.sh", "MULTI_NODE_INSTALL=MultiNodeInstall.py", "DB_BASE=/u01/app/oracle", "DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1", "INSTALL_FILE_2=LINUX.X64_193000_db_home.zip", 
"DB_INSTALL_RSP=db_sw_install_19c.rsp", "DBCA_RSP=dbca_19c.rsp", "DB_SETUP_FILE=setupDB.sh", "PWD_FILE=setPassword.sh", "RUN_FILE=runOracle.sh", "STOP_FILE=stopOracle.sh", "ENABLE_RAC_FILE=enableRAC.sh", "CHECK_DB_FILE=checkDBStatus.sh", "USER_SCRIPTS_FILE=runUserScripts.sh", "REMOTE_LISTENER_FILE=remoteListener.sh", "INSTALL_DB_BINARIES_FILE=installDBBinaries.sh", "GRID_HOME_CLEANUP=GridHomeCleanup.sh", "ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh", "DB_USER=oracle", "GRID_USER=grid", "FUNCTIONS=functions.sh", "COMMON_SCRIPTS=/common_scripts", "CHECK_SPACE_FILE=checkSpace.sh", "RESET_FAILED_UNITS=resetFailedUnits.sh", "SET_CRONTAB=setCrontab.sh", "CRONTAB_ENTRY=crontabEntry", "EXPECT=/usr/bin/expect", "BIN=/usr/sbin", "container=true", "INSTALL_SCRIPTS=/opt/scripts/install", "SCRIPT_DIR=/opt/scripts/startup", "GRID_PATH=/u01/app/19.3.0/grid/bin:/u01/app/19.3.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "DB_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/bin:/u01/app/oracle/product/19.3.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "GRID_LD_LIBRARY_PATH=/u01/app/19.3.0/grid/lib:/usr/lib:/lib", "DB_LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/lib:/usr/lib:/lib" ], "Cmd": [ "/usr/sbin/oracleinit" ], "ArgsEscaped": true, "Image": "oracle/database-rac:19.3.0", "Volumes": { "/common_scripts": {}, "/dev/shm": {} }, "WorkingDir": "/home/grid", "Entrypoint": null, "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "84d3f1824e04291e8a5c2933ac3cdf0b83146888aa6b22960820b41aa4d3a760", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/84d3f1824e04", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "rac_priv1_nw": { "IPAMConfig": { "IPv4Address": "192.168.17.150" }, "Links": null, "Aliases": [ "fd8b3c79e88f" ], "NetworkID": "ec2f9e2a70d759a8baee2686332663664639348ba39d87f6aa0ca691511c6f53", "EndpointID": "6efdbdc820718ac1279e4e7961d73395b3956c99e98b5b49b5eb4614606a048e", "Gateway": "192.168.17.1", "IPAddress": "192.168.17.150", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:11:96", "DriverOpts": null }, "rac_pub1_nw": { "IPAMConfig": { "IPv4Address": "172.16.1.150" }, "Links": null, "Aliases": [ "fd8b3c79e88f" ], "NetworkID": "4fcaaab01e3e6fa8ae386d379162e2d8c0a686a24216900ca35d6ec1277164d4", "EndpointID": "1e0eecd196e65f1245d6dcca2fb02240296db3a381544c4caff7b81a8bc71bde", "Gateway": "172.16.1.1", "IPAddress": "172.16.1.150", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:10:01:96", "DriverOpts": null } } } } ] [opc@instance-20191112-1400-docker ~]$ docker inspect racnode2 [ { "Id": "63cbba0a481b95e8056eea90e2808ebef31c7b732574616883444961189afab3", "Created": "2019-11-12T09:25:14.987405867Z", "Path": "/usr/sbin/oracleinit", "Args": [], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 28864, "ExitCode": 0, "Error": "", "StartedAt": "2019-11-12T09:26:48.019423062Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:29e8018e9d71ab4d0ab518210f56831d2981a4e12f04f5294a54def3e96b8dd0", "ResolvConfPath": 
"/var/lib/docker/containers/63cbba0a481b95e8056eea90e2808ebef31c7b732574616883444961189afab3/resolv.conf", "HostnamePath": "/var/lib/docker/containers/63cbba0a481b95e8056eea90e2808ebef31c7b732574616883444961189afab3/hostname", "HostsPath": "/opt/containers/rac_host_file", "LogPath": "/var/lib/docker/containers/63cbba0a481b95e8056eea90e2808ebef31c7b732574616883444961189afab3/63cbba0a481b95e8056eea90e2808ebef31c7b732574616883444961189afab3-json.log", "Name": "/racnode2", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "system_u:object_r:svirt_sandbox_file_t:s0:c60,c94", "ProcessLabel": "system_u:system_r:svirt_lxc_net_t:s0:c60,c94", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/boot:/boot:ro", "/opt/containers/rac_host_file:/etc/hosts", "/opt/.secrets:/run/secrets", "/sys/fs/cgroup:/sys/fs/cgroup:ro" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "default", "PortBindings": {}, "RestartPolicy": { "Name": "always", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": [ "SYS_NICE", "SYS_RESOURCE", "NET_ADMIN" ], "CapDrop": null, "Dns": [], "DnsOptions": [], "DnsSearch": [ "example.com" ], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "shareable", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "Tmpfs": { "/dev/shm": "rw,exec,size=4G", "/run": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 95000, "CpusetCpus": "", "CpusetMems": "", "Devices": [ { "PathOnHost": "/dev/sdb", "PathInContainer": "/dev/asm_disk1", "CgroupPermissions": "rwm" } ], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": [ { "Name": "rtprio", "Hard": 99, "Soft": 99 } ], "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/161d1b5da41fcadb84bc97e7940898772c526dbe8a42f40a2ad15ade062dc0f2-init/diff:/var/lib/docker/overlay2/3cadcaff0b00f54de27f068f84dbbf1187a1bd5b4dbbfee70d78d604644c0bcb/diff:/var/lib/docker/overlay2/8a5505943b109de7302d9593414731eac847caaf15455632c8303366a619a040/diff:/var/lib/docker/overlay2/5ca8ce83cc27adb7e5e6a9036ae5ba02137ce56ad08d74623c92b44a6124764a/diff:/var/lib/docker/overlay2/1403297cb21e20d7cc8bf1d37eab49697dfc514c55e78436b96d273aa299d549/diff", "MergedDir": "/var/lib/docker/overlay2/161d1b5da41fcadb84bc97e7940898772c526dbe8a42f40a2ad15ade062dc0f2/merged", "UpperDir": "/var/lib/docker/overlay2/161d1b5da41fcadb84bc97e7940898772c526dbe8a42f40a2ad15ade062dc0f2/diff", "WorkDir": "/var/lib/docker/overlay2/161d1b5da41fcadb84bc97e7940898772c526dbe8a42f40a2ad15ade062dc0f2/work" }, "Name": 
"overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/boot", "Destination": "/boot", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/opt/containers/rac_host_file", "Destination": "/etc/hosts", "Mode": "", "RW": true, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/opt/.secrets", "Destination": "/run/secrets", "Mode": "", "RW": true, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/sys/fs/cgroup", "Destination": "/sys/fs/cgroup", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "ae4128be2de4bf380513d8c8069186e547d14dd6c25d5bfa639729ab422b9990", "Source": "/var/lib/docker/volumes/ae4128be2de4bf380513d8c8069186e547d14dd6c25d5bfa639729ab422b9990/_data", "Destination": "/dev/shm", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" }, { "Type": "volume", "Name": "2e387a73aa4b27a4969eef54814fe3f985ca53870337bcdf5675c57e785ae895", "Source": "/var/lib/docker/volumes/2e387a73aa4b27a4969eef54814fe3f985ca53870337bcdf5675c57e785ae895/_data", "Destination": "/common_scripts", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "racnode2", "Domainname": "", "User": "grid", "AttachStdin": true, "AttachStdout": true, "AttachStderr": true, "Tty": true, "OpenStdin": true, "StdinOnce": true, "Env": [ "EXISTING_CLS_NODES=racnode1", "NODE_VIP=172.16.1.161", "VIP_HOSTNAME=racnode2-vip", "PRIV_IP=192.168.17.151", "PRIV_HOSTNAME=racnode2-priv", "PUBLIC_IP=172.16.1.151", "PUBLIC_HOSTNAME=racnode2", "DOMAIN=example.com", "SCAN_NAME=racnode-scan", "SCAN_IP=172.16.1.70", "ASM_DISCOVERY_DIR=/dev", "ASM_DEVICE_LIST=/dev/asm_disk1", "ORACLE_SID=ORCLCDB", "OP_TYPE=ADDNODE", "COMMON_OS_PWD_FILE=common_os_pwdfile.enc", "PWD_KEY=pwd.key", "PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "SETUP_LINUX_FILE=setupLinuxEnv.sh", "INSTALL_DIR=/opt/scripts", "GRID_BASE=/u01/app/grid", "GRID_HOME=/u01/app/19.3.0/grid", "INSTALL_FILE_1=LINUX.X64_193000_grid_home.zip", "GRID_INSTALL_RSP=gridsetup_19c.rsp", "GRID_SW_INSTALL_RSP=grid_sw_install_19c.rsp", "GRID_SETUP_FILE=setupGrid.sh", "FIXUP_PREQ_FILE=fixupPreq.sh", "INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh", "INSTALL_GRID_PATCH=applyGridPatch.sh", "INVENTORY=/u01/app/oraInventory", "CONFIGGRID=configGrid.sh", "ADDNODE=AddNode.sh", "DELNODE=DelNode.sh", "ADDNODE_RSP=grid_addnode.rsp", "SETUPSSH=setupSSH.expect", "DOCKERORACLEINIT=dockeroracleinit", "GRID_USER_HOME=/home/grid", "SETUPGRIDENV=setupGridEnv.sh", "RESET_OS_PASSWORD=resetOSPassword.sh", "MULTI_NODE_INSTALL=MultiNodeInstall.py", "DB_BASE=/u01/app/oracle", "DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1", "INSTALL_FILE_2=LINUX.X64_193000_db_home.zip", "DB_INSTALL_RSP=db_sw_install_19c.rsp", "DBCA_RSP=dbca_19c.rsp", "DB_SETUP_FILE=setupDB.sh", "PWD_FILE=setPassword.sh", "RUN_FILE=runOracle.sh", "STOP_FILE=stopOracle.sh", "ENABLE_RAC_FILE=enableRAC.sh", "CHECK_DB_FILE=checkDBStatus.sh", "USER_SCRIPTS_FILE=runUserScripts.sh", "REMOTE_LISTENER_FILE=remoteListener.sh", "INSTALL_DB_BINARIES_FILE=installDBBinaries.sh", "GRID_HOME_CLEANUP=GridHomeCleanup.sh", "ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh", "DB_USER=oracle", "GRID_USER=grid", "FUNCTIONS=functions.sh", "COMMON_SCRIPTS=/common_scripts", "CHECK_SPACE_FILE=checkSpace.sh", "RESET_FAILED_UNITS=resetFailedUnits.sh", "SET_CRONTAB=setCrontab.sh", "CRONTAB_ENTRY=crontabEntry", "EXPECT=/usr/bin/expect", "BIN=/usr/sbin", "container=true", "INSTALL_SCRIPTS=/opt/scripts/install", 
"SCRIPT_DIR=/opt/scripts/startup", "GRID_PATH=/u01/app/19.3.0/grid/bin:/u01/app/19.3.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "DB_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/bin:/u01/app/oracle/product/19.3.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "GRID_LD_LIBRARY_PATH=/u01/app/19.3.0/grid/lib:/usr/lib:/lib", "DB_LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/lib:/usr/lib:/lib" ], "Cmd": [ "/usr/sbin/oracleinit" ], "ArgsEscaped": true, "Image": "oracle/database-rac:19.3.0", "Volumes": { "/common_scripts": {}, "/dev/shm": {} }, "WorkingDir": "/home/grid", "Entrypoint": null, "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "fb65200a54fab6400e633e8cb3489cec64514eed5a3a074a5c5ed036e7e8663c", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/fb65200a54fa", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "rac_priv1_nw": { "IPAMConfig": { "IPv4Address": "192.168.17.151" }, "Links": null, "Aliases": [ "63cbba0a481b" ], "NetworkID": "ec2f9e2a70d759a8baee2686332663664639348ba39d87f6aa0ca691511c6f53", "EndpointID": "c87550092018ebbeb491bc33c293af107213c850b2c0211f8e71e3a0c37478e5", "Gateway": "192.168.17.1", "IPAddress": "192.168.17.151", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:11:97", "DriverOpts": null }, "rac_pub1_nw": { "IPAMConfig": { "IPv4Address": "172.16.1.151" }, "Links": null, "Aliases": [ "63cbba0a481b" ], "NetworkID": "4fcaaab01e3e6fa8ae386d379162e2d8c0a686a24216900ca35d6ec1277164d4", "EndpointID": "fd814660b8bfd30601ec1150b6e7be5ea836fdc160cd989a2533930ee0887f8b", "Gateway": "172.16.1.1", "IPAddress": "172.16.1.151", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:10:01:97", "DriverOpts": null } } } } ]