Pacemaker 1.1
从头开始搭建集群
Step-by-Step Instructions for Building Your First High-Availability Cluster
版 9
Andrew Beekhof
作者
Red Hat
Raoul Scarazzini
意大利语翻译 rasca@miamammausalinux.org
Dan Frîncu
罗马尼亚语翻译 df.cluster@gmail.com
法律通告
Copyright © 2009-2016 Andrew Beekhof.
The text of and illustrations in this document are licensed under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA")[1].
In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
In addition to the requirements of this license, the following activities are looked upon favorably:
-
If you are distributing Open Publication works on hardcopy or CD-ROM, you provide email notification to the authors of your intent to redistribute at least thirty days before your manuscript or media freeze, to give the authors time to provide updated documents. This notification should describe modifications, if any, made to the document.
-
All substantive modifications (including deletions) be either clearly marked up in the document or else described in an attachment to the document.
-
Finally, while it is not mandatory under this license, it is considered good form to offer a free copy of any hardcopy or CD-ROM expression of the author(s) work.
摘要
本文档的主要目的是提供一站式指南,教您如何使用Pacemaker创建一个主/备模式的集群并把它转换到主/主模式。
示例集群会使用以下软件:
-
CentOS 7.1 作为基本操作系统
-
Corosync作为通信层并提供关系管理服务,
-
Pacemaker来实现资源管理,
-
DRBD 作为一个经济的共享存储方案,
-
GFS2 作为集群文件系统(主/主模式中)
Given the graphical nature of the install process, a number of screenshots are included. However, the guide is primarily composed of commands, the reasons for executing them and their expected outputs.
目录
2.1.5. Configure Time Synchronization
2.4. Configure Communication Between Nodes
2.4.1. Configure Host Name Resolution
2.6. Configure the Cluster Software
2.6.1. Allow cluster services through firewall
3.1. Simplify administration using a cluster shell
5. Create an Active/Passive Cluster
5.1. Explore the Existing Configuration
6. Add Apache HTTP Server as a Cluster Service
6.5. Ensure Resources Run on the Same Host
6.6. Ensure Resources Start and Stop in Order
6.7. Prefer One Node Over Another
7. Replicate Storage Using DRBD
7.2. Allocate a Disk Volume for DRBD
7.6. Configure the Cluster for the DRBD device
7.7. Configure the Cluster for the Filesystem
8.3. Configure the Cluster for STONITH
9. Convert Cluster to Active/Active
9.1. Install Cluster Filesystem Software
9.2. Configure the Cluster for the DLM
9.3. Create and Populate GFS2 Filesystem
9.6. Clone the Filesystem and Apache Resources
B. Sample Corosync Configuration
插图清单
1.1. Pacemaker 层次
1.2. 内部组件
1.3. Active/Passive 冗余
1.4. Shared Failover
1.5. N to N 冗余
2.1. CentOS 7.1 Installation Welcome Screen
2.2. CentOS 7.1 Installation Summary Screen
2.3. CentOS 7.1 Console Prompt
范例清单
前言
目录
1. 文档约定
本手册使用几个约定来突出某些用词和短语以及信息的某些片段。
在 PDF 版本以及纸版中,本手册使用在 Liberation 字体套件中选出的字体。如果您在您的系统中安装了 Liberation 字体套件,它还可用于 HTML 版本。如果没有安装,则会显示可替换的类似字体。请注意:红帽企业 Linux 5 以及其后的版本默认包含 Liberation 字体套件。
1.1. 排版约定
我们使用四种排版约定突出特定用词和短语。这些约定及其使用环境如下。
单行粗体
用来突出系统输入,其中包括 shell 命令、文件名以及路径。还可用来突出按键以及组合键。例如:
要查看您当前工作目录中文件 my_next_bestselling_novel 的内容,请在 shell 提示符后输入 cat my_next_bestselling_novel 命令并按 Enter 键执行该命令。
以上内容包括一个文件名,一个 shell 命令以及一个按键,它们都以固定粗体形式出现,且全部与上下文有所区别。
按键组合与单独按键之间的区别是按键组合是使用加号将各个按键连在一起。例如:
按 Enter 执行该命令。
按 Ctrl+Alt+F2 切换到虚拟终端。
第一个示例突出的是要按的特定按键。第二个示例突出了按键组合:一组要同时按下的三个按键。
如果讨论的是源码、等级名称、方法、功能、变量名称以及在段落中提到的返回的数值,那么都会以上述形式出现,即固定粗体。例如:
与文件相关的等级包括用于文件系统的 filesystem、用于文件的 file 以及用于目录的 dir。每个等级都有其自身相关的权限。
比例粗体
这是指在系统中遇到的文字或者短语,其中包括应用程序名称、对话框文本、标记的按钮、复选框以及单选按钮标签、菜单标题以及子菜单标题。例如:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In theButtons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
要在 gedit 文件中插入特殊字符,请在主菜单栏中选择「应用程序」 → 「附件」 → 「字符映射表」。接下来选择从 Character Map 菜单中选择Search → 「查找......」,在「搜索」字段输入字符名称并点击「下一个」按钮。此时会在「字符映射表」中突出您搜索的字符。双击突出的字符将其放在「要复制的文本」字段中,然后点击「复制」按钮。现在返回您的文档,并选择 gedit 菜单中的「编辑」 → 「粘贴」。
以上文本包括应用程序名称、系统范围菜单名称及项目、应用程序特定菜单名称以及按钮和 GUI 界面中的文本,所有都以比例粗体出现并与上下文区别。
固定粗斜体 或者 比例粗斜体
无论固定粗体或者比例粗体,附加的斜体表示是可替换或者变量文本。斜体表示那些不直接输入的文本或者那些根据环境改变的文本。例如:
要使用 ssh 连接到远程机器,请在 shell 提示符后输入 ssh username@domain.name。如果远程机器是 example.com 且您在该机器中的用户名为 john,请输入 ssh john@example.com。
mount -o remount file-system 命令会重新挂载命名的文件系统。例如:要重新挂载 /home 文件系统,则命令为 mount -o remount /home。
要查看目前安装的软件包版本,请使用 rpm -q package 命令。它会返回以下结果:package-version-release。
请注意上述使用黑斜体的文字 -- username、domain.name、file-system、package、version 和 release。每个字都是一个占位符,可用作您执行命令时输入的文本,也可作为该系统显示的文本。
不考虑工作中显示标题的标准用法,斜体表示第一次使用某个新且重要的用语。例如:
Publican 是一个 DocBook 发布系统。
1.2. 抬升式引用约定
终端输出和源代码列表要与周围文本明显分开。
将发送到终端的输出设定为 Mono-spaced Roman 并显示为:
books Desktop documentation drafts mss photos stuff svn books_tests Desktop1 downloads images notes scripts svgs
源码列表也设为 Mono-spaced Roman,但添加下面突出的语法:
package org.jboss.book.jca.ex1; import javax.naming.InitialContext; public class ExClient { public static void main(String args[]) throws Exception { InitialContext iniCtx = new InitialContext(); Object ref = iniCtx.lookup("EchoBean"); EchoHome home = (EchoHome) ref; Echo echo = home.create(); System.out.println("Created Echo"); System.out.println("Echo.echo('Hello') = " + echo.echo("Hello")); } }
1.3. 备注及警告
最后,我们使用三种视觉形式来突出那些可能被忽视的信息。
注意
备注是对手头任务的提示、捷径或者备选的解决方法。忽略提示不会造成负面后果,但您可能会错过一个更省事的诀窍。
重要
重要框中的内容是那些容易错过的事情:配置更改只可用于当前会话,或者在应用更新前要重启的服务。忽略‘重要’框中的内容不会造成数据丢失但可能会让您抓狂。
警告
警告是不应被忽略的。忽略警告信息很可能导致数据丢失。
2. We Need Feedback!
If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla[2] against the product Pacemaker.
When submitting a bug report, be sure to mention the manual's identifier: Clusters_from_Scratch
If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
[2] http://bugs.clusterlabs.org
第 1 章 Read-Me-First
目录
1.1. Scope of this Document
Computer clusters can be used to provide highly available services or resources. The redundancy of multiple machines is used to guard against failures of many types.
This document will walk through the installation and setup of simple clusters using the CentOS distribution, version 7.1.
The clusters described here will use Pacemaker and Corosync to provide resource management and messaging. Required packages and modifications to their configuration files are described along with the use of the Pacemaker command line tool for generating the XML used for cluster control.
Pacemaker is a central component and provides the resource management required in these systems. This management includes detecting and recovering from the failure of various nodes, resources and services under its control.
When more in depth information is required and for real world usage, please refer to the Pacemaker Explained manual.
1.2. What Is Pacemaker?
Pacemaker is a cluster resource manager, that is, the logic responsible for the life-cycle of deployed software — indirectly perhaps even whole systems or their interconnections — under its control within a set of computers (a.k.a. nodes) and driven by prescribed rules.
It achieves maximum availability for your cluster services (a.k.a. resources) by detecting and recovering from node- and resource-level failures by making use of the messaging and membership capabilities provided by your preferred cluster infrastructure (either Corosync or Heartbeat), and possibly by utilizing other parts of the overall cluster stack.
注意
The term high availability, together with its acronym HA, was coined for this goal of minimal downtime and is well established in the sector. Should a context require differentiating this sort of cluster from high performance computing (HPC) clusters (apparently not the case in this document), HA cluster can be used.
Pacemaker’s key features include:
-
Detection and recovery of node- and service-level failures
-
Storage agnostic, no requirement for shared storage
-
Resource agnostic, anything that can be controlled by a script can be used as a cluster service
-
Supports fencing (also referred to by the acronym STONITH, deciphered later on) for ensuring data integrity
-
Supports large and small clusters
-
Supports both quorate and resource-driven clusters
-
Supports practically any redundancy configuration
-
Configuration is automatically synchronized across all nodes
-
Supports cluster-wide ordering, colocation and anti-colocation
-
Support for advanced service types
-
Clones: for services which need to be active on multiple nodes
-
Multi-state: for services with multiple modes (e.g. master/slave, primary/secondary)
-
-
Unified, scriptable cluster management tools
1.3. Pacemaker Architecture
At the highest level, the cluster is made up of three pieces:
-
Non-cluster-aware components. These pieces include the resources themselves; scripts that start, stop and monitor them (a minimal sketch of such a script follows this list); and a local daemon that masks the differences between the different standards these scripts implement. Even though interactions of these resources when run as multiple instances can resemble a distributed system, they still lack the proper HA mechanisms and/or autonomous cluster-wide governance as subsumed in the following item.
-
Resource management. Pacemaker provides the brain that processes and reacts to events regarding the cluster. These events include nodes joining or leaving the cluster; resource events caused by failures, maintenance and scheduled activities; and other administrative actions. Pacemaker will compute the ideal state of the cluster and plot a path to achieve it after any of these events. This may include moving resources, stopping nodes and even forcing them offline with remote power switches.
-
Low-level infrastructure. Projects like Corosync, CMAN and Heartbeat provide reliable messaging, membership and quorum information about the cluster.
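To make the role of those start/stop/monitor scripts concrete, here is a minimal, hypothetical skeleton of the interface an OCF-style agent exposes. It is illustrative only: "mydaemon" and its path are invented names, and real agents (shipped in the resource-agents package under /usr/lib/ocf/resource.d/) also implement actions such as meta-data and validate-all.
#!/bin/sh
# Hypothetical OCF-style agent skeleton -- "mydaemon" is an invented name
case "$1" in
  start)
    /usr/sbin/mydaemon && exit 0          # 0 = OCF_SUCCESS
    exit 1 ;;                             # 1 = OCF_ERR_GENERIC
  stop)
    pkill -x mydaemon
    exit 0 ;;                             # stopping an already-stopped service is still success
  monitor)
    pgrep -x mydaemon >/dev/null && exit 0 # running
    exit 7 ;;                             # 7 = OCF_NOT_RUNNING
  *)
    exit 3 ;;                             # 3 = OCF_ERR_UNIMPLEMENTED
esac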
When combined with Corosync, Pacemaker also supports popular open source cluster filesystems.[3]
Due to past standardization within the cluster filesystem community, cluster filesystems make use of a common distributed lock manager, which makes use of Corosync for its messaging and membership capabilities (which nodes are up/down) and Pacemaker for fencing services.
图 1.1. Pacemaker 层次
1.3.1. Internal Components
Pacemaker itself is composed of five key components:
-
Cluster Information Base (CIB)
-
Cluster Resource Management daemon (CRMd)
-
Local Resource Management daemon (LRMd)
-
Policy Engine (PEngine or PE)
-
Fencing daemon (STONITHd)
图 1.2. 内部组件
The CIB uses XML to represent both the cluster’s configuration and current state of all resources in the cluster. The contents of the CIB are automatically kept in sync across the entire cluster and are used by the PEngine to compute the ideal state of the cluster and how it should be achieved.
This list of instructions is then fed to the Designated Controller (DC). Pacemaker centralizes all cluster decision making by electing one of the CRMd instances to act as a master. Should the elected CRMd process (or the node it is on) fail, a new one is quickly established.
The DC carries out the PEngine’s instructions in the required order by passing them to either the Local Resource Management daemon (LRMd) or CRMd peers on other nodes via the cluster messaging infrastructure (which in turn passes them on to their LRMd process).
The peer nodes all report the results of their operations back to the DC and, based on the expected and actual results, will either execute any actions that needed to wait for the previous one to complete, or abort processing and ask the PEngine to recalculate the ideal cluster state based on the unexpected results.
In some cases, it may be necessary to power off nodes in order to protect shared data or complete resource recovery. For this, Pacemaker comes with STONITHd.
注意
STONITH is an acronym for Shoot-The-Other-Node-In-The-Head, a recommended practice whereby a misbehaving node is promptly fenced (shut off, cut off from shared resources or otherwise immobilized), usually implemented with a remote power switch.
In Pacemaker, STONITH devices are modeled as resources (and configured in the CIB) to enable them to be easily monitored for failure, however STONITHd takes care of understanding the STONITH topology such that its clients simply request a node be fenced, and it does the rest.
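Chapter 8 covers STONITH configuration in detail. As a preview only, creating such a resource with pcs might look roughly like the following; the fence agent and every parameter here are placeholders that depend entirely on your fencing hardware.
# Hypothetical example -- substitute an agent and parameters appropriate to your hardware
[root@pcmk-1 ~]# pcs stonith create ipmi-fencing fence_ipmilan \
      pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 \
      login=testuser passwd=acd123 op monitor interval=60s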
1.4. Types of Pacemaker Clusters
Pacemaker makes no assumptions about your environment. This allows it to support practically any redundancy configuration including Active/Active, Active/Passive, N+1, N+M, N-to-1 and N-to-N.
图 1.3. Active/Passive 冗余
Two-node Active/Passive clusters using Pacemaker and DRBD are a cost-effective solution for many High Availability situations.
图 1.4. Shared Failover
By supporting many nodes, Pacemaker can dramatically reduce hardware costs by allowing several active/passive clusters to be combined and share a common backup node.
图 1.5. N to N 冗余
When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload.
[3] Even though Pacemaker also supports Heartbeat, the filesystems need to use the stack for messaging and membership, and Corosync seems to be what they’re standardizing on. Technically, it would be possible for them to support Heartbeat as well, but there seems little interest in this.
第 2 章 安装
目录
2.1.5. Configure Time Synchronization
2.4. Configure Communication Between Nodes
2.4.1. Configure Host Name Resolution
2.6. Configure the Cluster Software
2.6.1. Allow cluster services through firewall
2.1. Install CentOS 7.1
2.1.1. Boot the Install Image
Download the 4GB CentOS 7.1 DVD ISO. Use the image to boot a virtual machine, or burn it to a DVD or USB drive and boot a physical server from that.
After starting the installation, select your language and keyboard layout at the welcome screen.
图 2.1. CentOS 7.1 Installation Welcome Screen
2.1.2. Installation Options
At this point, you get a chance to tweak the default installation options.
图 2.2. CentOS 7.1 Installation Summary Screen
Ignore the SOFTWARE SELECTION section (try saying that 10 times quickly). The Infrastructure Server environment does have add-ons with much of the software we need, but we will leave it as a Minimal Install here, so that we can see exactly what software is required later.
2.1.3. Configure Network
In the NETWORK & HOSTNAME section:
-
Edit Host Name: as desired. For this example, we will use pcmk-1.localdomain.
-
Select your network device, press Configure…, and manually assign a fixed IP address. For this example, we’ll use 192.168.122.101 under IPv4 Settings (with an appropriate netmask, gateway and DNS server).
-
Flip the switch to turn your network device on.
重要
Do not accept the default network settings. Cluster machines should never obtain an IP address via DHCP, because DHCP’s periodic address renewal will interfere with corosync.
2.1.4. Configure Disk
By default, the installer’s automatic partitioning will use LVM (which allows us to dynamically change the amount of space allocated to a given partition). However, it allocates all free space to the / (aka. root) partition, which cannot be reduced in size later (dynamic increases are fine).
In order to follow the DRBD and GFS2 portions of this guide, we need to reserve space on each machine for a replicated volume.
Enter the INSTALLATION DESTINATION section, ensure the hard drive you want to install to is selected, select I will configure partitioning, and press Done.
In the MANUAL PARTITIONING screen that comes next, click the option to create mountpoints automatically. Select the / mountpoint, and reduce the desired capacity by 1GiB or so. Select Modify… by the volume group name, and change the Size policy: to As large as possible, to make the reclaimed space available inside the LVM volume group. We’ll add the additional volume later.
2.1.5. Configure Time Synchronization
It is highly recommended to enable NTP on your cluster nodes. Doing so ensures all nodes agree on the current time and makes reading log files significantly easier.
CentOS will enable NTP automatically. If you want to change any time-related settings (such as time zone or NTP server), you can do this in the TIME & DATE section.
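If you prefer to adjust these settings from the command line once the system is installed, a brief sketch follows (this assumes the chrony service that CentOS 7 ships by default; the time zone shown is only an example):
[root@pcmk-1 ~]# timedatectl set-timezone America/New_York    # example time zone only
[root@pcmk-1 ~]# systemctl enable chronyd
[root@pcmk-1 ~]# systemctl start chronyd
[root@pcmk-1 ~]# timedatectl status                           # confirm NTP is enabled and synchronized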
2.1.6. Finish Install
Select Begin Installation. Once it completes, set a root password, and reboot as instructed. For the purposes of this document, it is not necessary to create any additional users. After the node reboots, you’ll see a login prompt on the console. Login using root and the password you created earlier.
图 2.3. CentOS 7.1 Console Prompt
注意
From here on, we’re going to be working exclusively from the terminal.
2.2. Configure the OS
2.2.1. Verify Networking
Ensure that the machine has the static IP address you configured earlier.
[root@pcmk-1 ~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:d7:d6:08 brd ff:ff:ff:ff:ff:ff inet 192.168.122.101/24 brd 192.168.122.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::5054:ff:fed7:d608/64 scope link valid_lft forever preferred_lft forever
注意
If you ever need to change the node’s IP address from the command line, follow these steps:
[root@pcmk-1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-${device} # manually edit as desired [root@pcmk-1 ~]# nmcli dev disconnect ${device} [root@pcmk-1 ~]# nmcli con reload ${device} [root@pcmk-1 ~]# nmcli con up ${device}
This makes NetworkManager aware that a change was made on the config file.
Next, ensure that the routes are as expected:
[root@pcmk-1 ~]# ip route default via 192.168.122.1 dev eth0 proto static metric 100 192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.101 metric 100
If there is no line beginning with default via, then you may need to add a line such as GATEWAY="192.168.122.1" to the device configuration using the same process as described above for changing the IP address.
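For reference, a static configuration in /etc/sysconfig/network-scripts/ifcfg-eth0 might then contain lines roughly like the following (values match this guide's example network; your device name and addresses may differ, and the DNS entry is only an example):
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.122.101
PREFIX=24
GATEWAY=192.168.122.1
DNS1=192.168.122.1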
Now, check for connectivity to the outside world. Start small by testing whether we can reach the gateway we configured.
[root@pcmk-1 ~]# ping -c 1 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_req=1 ttl=64 time=0.249 ms --- 192.168.122.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
Now try something external; choose a location you know should be available.
[root@pcmk-1 ~]# ping -c 1 www.google.com PING www.l.google.com (173.194.72.106) 56(84) bytes of data. 64 bytes from tf-in-f106.1e100.net (173.194.72.106): icmp_req=1 ttl=41 time=167 ms --- www.l.google.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 167.618/167.618/167.618/0.000 ms
2.2.2. Login Remotely
The console isn’t a very friendly place to work from, so we will now switch to accessing the machine remotely via SSH where we can use copy and paste, etc.
From another host, check whether we can see the new host at all:
beekhof@f16 ~ # ping -c 1 192.168.122.101 PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data. 64 bytes from 192.168.122.101: icmp_req=1 ttl=64 time=1.01 ms --- 192.168.122.101 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 1.012/1.012/1.012/0.000 ms
Next, login as root via SSH.
beekhof@f16 ~ # ssh -l root 192.168.122.101 The authenticity of host '192.168.122.101 (192.168.122.101)' can't be established. ECDSA key fingerprint is 6e:b7:8f:e2:4c:94:43:54:a8:53:cc:20:0f:29:a4:e0. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '192.168.122.101' (ECDSA) to the list of known hosts. root@192.168.122.101's password: Last login: Tue Aug 11 13:14:39 2015 [root@pcmk-1 ~]#
2.2.3. Apply Updates
Apply any package updates released since your installation image was created:
[root@pcmk-1 ~]# yum update
2.2.4. Use Short Node Names
During installation, we filled in the machine’s fully qualified domain name (FQDN), which can be rather long when it appears in cluster logs and status output. See for yourself how the machine identifies itself:
[root@pcmk-1 ~]# uname -n pcmk-1.localdomain
We can use the hostnamectl tool to strip off the domain name:
[root@pcmk-1 ~]# hostnamectl set-hostname $(uname -n | sed s/\\..*//)
Now, check that the machine is using the correct name:
[root@pcmk-1 ~]# uname -n pcmk-1
2.3. Repeat for Second Node
Repeat the Installation steps so far, so that you have two nodes ready to have the cluster software installed.
For this document, the other node is called pcmk-2 and has the IP address 192.168.122.102.
2.4. Configure Communication Between Nodes
2.4.1. Configure Host Name Resolution
Confirm that the two new nodes can communicate:
[root@pcmk-1 ~]# ping -c 3 192.168.122.102 PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data. 64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=0.343 ms 64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.402 ms 64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.558 ms --- 192.168.122.102 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.343/0.434/0.558/0.092 ms
Now we need to make sure we can communicate with the machines by their name. If you have a DNS server, add additional entries for the two machines. Otherwise, you’ll need to add the machines to /etc/hosts on both nodes. Below are the entries for my cluster nodes:
[root@pcmk-1 ~]# grep pcmk /etc/hosts 192.168.122.101 pcmk-1.clusterlabs.org pcmk-1 192.168.122.102 pcmk-2.clusterlabs.org pcmk-2
Now let’s try pinging them by name:
[root@pcmk-1 ~]# ping -c 3 pcmk-2 PING pcmk-2.clusterlabs.org (192.168.122.101) 56(84) bytes of data. 64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=1 ttl=64 time=0.164 ms 64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=2 ttl=64 time=0.475 ms 64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=3 ttl=64 time=0.186 ms --- pcmk-2.clusterlabs.org ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2001ms rtt min/avg/max/mdev = 0.164/0.275/0.475/0.141 ms
2.4.2. Configure SSH
SSH is a convenient and secure way to copy files and perform commands remotely. For this document, we create an SSH key (using the -N option) so that we can log in without being prompted for a password.
警告
Unprotected SSH keys (those without a password) are not recommended for servers exposed to the outside world. We use them here only to simplify the demo.
Create a new key and allow anyone with that key to log in:
Creating and Activating a new SSH Key
[root@pcmk-1 ~]# ssh-keygen -t dsa -f ~/.ssh/id_dsa -N "" Generating public/private dsa key pair. Your identification has been saved in /root/.ssh/id_dsa. Your public key has been saved in /root/.ssh/id_dsa.pub. The key fingerprint is: 91:09:5c:82:5a:6a:50:08:4e:b2:0c:62:de:cc:74:44 root@pcmk-1.clusterlabs.org The key's randomart image is: +--[ DSA 1024]----+ |==.ooEo.. | |X O + .o o | | * A + | | + . | | . S | | | | | | | | | +-----------------+ [root@pcmk-1 ~]# cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys
Install the key on the other node:
[root@pcmk-1 ~]# scp -r ~/.ssh pcmk-2: The authenticity of host 'pcmk-2 (192.168.122.102)' can't be established. ECDSA key fingerprint is a4:f5:b2:34:9d:86:2b:34:a2:87:37:b9:ca:68:52:ec. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'pcmk-2,192.168.122.102' (ECDSA) to the list of known hosts. root@pcmk-2's password: id_dsa.pub 100% 616 0.6KB/s 00:00 id_dsa 100% 672 0.7KB/s 00:00 known_hosts 100% 400 0.4KB/s 00:00 authorized_keys 100% 616 0.6KB/s 00:00
Test that you can now run commands remotely, without being prompted:
[root@pcmk-1 ~]# ssh pcmk-2 -- uname -n pcmk-2
2.5. Install the Cluster Software
Fire up a shell on both nodes and run the following to install pacemaker, and while we’re at it, some command-line tools to make our lives easier:
# yum install -y pacemaker pcs psmisc policycoreutils-python
重要
This document will show commands that need to be executed on both nodes with a simple # prompt. Be sure to run them on each node individually.
注意
This document uses pcs for cluster management. Other alternatives, such as crmsh, are available, but their syntax will differ from the examples used here.
2.6. Configure the Cluster Software
2.6.1. Allow cluster services through firewall
On each node, allow cluster-related services through the local firewall:
# firewall-cmd --permanent --add-service=high-availability success # firewall-cmd --reload success
注意
If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports, which can be used by various clustering components: TCP ports 2224, 3121, and 21064, and UDP port 5405.
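For example, with plain iptables, rules along these lines would open the required ports (a sketch only; integrate them into your existing rule set and persist them using your distribution's usual mechanism):
# Allow the cluster-related TCP and UDP ports listed above
[root@pcmk-1 ~]# iptables -I INPUT -p tcp -m multiport --dports 2224,3121,21064 -j ACCEPT
[root@pcmk-1 ~]# iptables -I INPUT -p udp --dport 5405 -j ACCEPT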
If you run into any problems during testing, you might want to disable the firewall and SELinux entirely until you have everything working. This may create significant security issues and should not be performed on machines that will be exposed to the outside world, but may be appropriate during development and testing on a protected host.
To disable security measures:
[root@pcmk-1 ~]# setenforce 0 [root@pcmk-1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config [root@pcmk-1 ~]# systemctl disable firewalld.service [root@pcmk-1 ~]# systemctl stop firewalld.service [root@pcmk-1 ~]# iptables --flush
2.6.2. Enable pcs Daemon
Before the cluster can be configured, the pcs daemon must be started and enabled to start at boot time on each node. This daemon works with the pcs command-line interface to manage synchronizing the corosync configuration across all nodes in the cluster.
Start and enable the daemon by issuing the following commands on each node:
# systemctl start pcsd.service # systemctl enable pcsd.service ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service'
The installed packages will create a hacluster user with a disabled password. While this is fine for running pcs commands locally, the account needs a login password in order to perform such tasks as syncing the corosync configuration, or starting and stopping the cluster on other nodes.
This tutorial will make use of such commands, so now we will set a password for the hacluster user, using the same password on both nodes:
# passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.
注意
Alternatively, to script this process or set the password on a different machine from the one you’re logged into, you can use the --stdin option for passwd:
[root@pcmk-1 ~]# ssh pcmk-2 -- 'echo redhat1 | passwd --stdin hacluster'
2.6.3. Configure Corosync
On either node, use pcs cluster auth to authenticate as the hacluster user:
[root@pcmk-1 ~]# pcs cluster auth pcmk-1 pcmk-2 Username: hacluster Password: pcmk-1: Authorized pcmk-2: Authorized
Next, use pcs cluster setup on the same node to generate and synchronize the corosync configuration:
[root@pcmk-1 ~]# pcs cluster setup --name mycluster pcmk-1 pcmk-2 Shutting down pacemaker/corosync services... Redirecting to /bin/systemctl stop pacemaker.service Redirecting to /bin/systemctl stop corosync.service Killing any remaining services... Removing all cluster configuration files... pcmk-1: Succeeded pcmk-2: Succeeded
If you received an authorization error for either of those commands, make sure you configured the hacluster user account on each node with the same password.
注意
Early versions of pcs required that --name be omitted from the above command.
If you are not using pcs for cluster administration, follow whatever procedures are appropriate for your tools to create a corosync.conf and copy it to all nodes.
The pcs command will configure corosync to use UDP unicast transport; if you choose to use multicast instead, choose a multicast address carefully. [4]
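For reference, the transport choice ends up in the totem section of corosync.conf; with the defaults used here, that fragment looks roughly like this (see 附录 B for a complete sample):
totem {
    version: 2
    cluster_name: mycluster
    transport: udpu
}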
The final /etc/corosync.conf configuration on each node should look something like the sample in 附录 B, Sample Corosync Configuration.
[4] For some subtle issues, see the now-defunct http://web.archive.org/web/20101211210054/http://29west.com/docs/THPM/multicast-address-assignment.html or the more detailed treatment in Cisco’s Guidelines for Enterprise IP Multicast Address Allocation paper.
第 3 章 Pacemaker Tools
目录
3.1. Simplify administration using a cluster shell
3.1. Simplify administration using a cluster shell
In the dark past, configuring Pacemaker required the administrator to read and write XML. In true UNIX style, there were also a number of different commands for querying and configuring the cluster.
All of that has been greatly simplified with the creation of unified command-line shells (and GUIs) that hide all the messy XML scaffolding.
These shells take all the individual aspects required for managing and configuring a cluster, and pack them into one simple-to-use command line tool.
They even allow you to queue up several changes at once and commit them atomically.
Two popular command-line shells are pcs and crmsh. This edition of Clusters from Scratch is based on pcs.
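As an example of queuing up several changes and committing them in one step, pcs can operate on an offline copy of the CIB and push everything at once. This is only a sketch to be run once the cluster is up (第 4 章); the file name and the ExampleIP resource are arbitrary placeholders.
[root@pcmk-1 ~]# pcs cluster cib my_changes.xml        # save a working copy of the CIB
[root@pcmk-1 ~]# pcs -f my_changes.xml resource create ExampleIP ocf:heartbeat:IPaddr2 \
      ip=192.168.122.130 op monitor interval=30s       # queue a change against the copy
[root@pcmk-1 ~]# pcs cluster cib-push my_changes.xml   # apply all queued changes at once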
注意
The two shells share many concepts but the scope, layout and syntax does differ, so make sure you read the version of this guide that corresponds to the software installed on your system.
3.2. Explore pcs
Start by taking some time to familiarize yourself with what pcs can do.
[root@pcmk-1 ~]# pcs Usage: pcs [-f file] [-h] [commands]... Control and configure pacemaker and corosync. Options: -h, --help Display usage and exit -f file Perform actions on file instead of active CIB --debug Print all network traffic and external commands run --version Print pcs version information Commands: cluster Configure cluster options and nodes resource Manage cluster resources stonith Configure fence devices constraint Set resource constraints property Set pacemaker properties acl Set pacemaker access control lists status View cluster status config View and manage cluster configuration
As you can see, the different aspects of cluster management are separated into categories: resource, cluster, stonith, property, constraint, and status. To discover the functionality available in each of these categories, one can issue the command pcs category help. Below is an example of all the options available under the status category.
[root@pcmk-1 ~]# pcs status help Usage: pcs status [commands]... View current cluster and resource status Commands: [status] [--full] View all information about the cluster and resources (--full provides more details) resources View current status of cluster resources groups View currently configured groups and their resources cluster View current cluster status corosync View current membership information as seen by corosync nodes [corosync|both|config] View current status of nodes from pacemaker. If 'corosync' is specified, print nodes currently configured in corosync, if 'both' is specified, print nodes from both corosync & pacemaker. If 'config' is specified, print nodes from corosync & pacemaker configuration. pcsd <node> ... Show the current status of pcsd on the specified nodes xml View xml version of status (output from crm_mon -r -1 -X)
Additionally, if you are interested in the version and supported cluster stack(s) available with your Pacemaker installation, run:
[root@pcmk-1 ~]# pacemakerd --features Pacemaker 1.1.12 (Build: a14efad) Supporting v3.0.9: generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc upstart systemd nagios corosync-native atomic-attrd acls
注意
If the SNMP and/or email options are not listed, then Pacemaker was not built to support them. This may be by the choice of your distribution, or the required libraries may not have been available. Please contact whoever supplied you with the packages for more details.
第 4 章 Start and Verify Cluster
目录
4.1. Start the Cluster
Now that corosync is configured, it is time to start the cluster. The command below will start corosync and pacemaker on both nodes in the cluster. If you are issuing the start command from a different node than the one you ran the pcs cluster auth command on earlier, you must authenticate on the current node you are logged into before you will be allowed to start the cluster.
[root@pcmk-1 ~]# pcs cluster start --all pcmk-1: Starting Cluster... pcmk-2: Starting Cluster...
注意
An alternative to using the pcs cluster start --all command is to issue either of the below command sequences on each node in the cluster separately:
# pcs cluster start Starting Cluster...
or
# systemctl start corosync.service # systemctl start pacemaker.service
重要
In this example, we are not enabling the corosync and pacemaker services to start at boot. If a cluster node fails or is rebooted, you will need to run pcs cluster start nodename (or --all) to start the cluster on it. While you could enable the services to start at boot, requiring a manual start of cluster services gives you the opportunity to do a post-mortem investigation of a node failure before returning it to the cluster.
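If you later decide that you do want the services started at boot, pcs can enable them on every node in one step (shown here for completeness only; this guide assumes manual starts):
# Optional: have corosync and pacemaker start automatically at boot on all nodes
[root@pcmk-1 ~]# pcs cluster enable --all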
4.2. Verify Corosync Installation
First, use corosync-cfgtool to check whether cluster communication is happy:
[root@pcmk-1 ~]# corosync-cfgtool -s Printing ring status. Local node ID 1 RING ID 0 id = 192.168.122.101 status = ring 0 active with no faults
We can see here that everything appears normal with our fixed IP address (not a 127.0.0.x loopback address) listed as the id, and no faults for the status.
If you see something different, you might want to start by checking the node’s network, firewall and selinux configurations.
Next, check the membership and quorum APIs:
[root@pcmk-1 ~]# corosync-cmapctl | grep members runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0 runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.122.101) runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1 runtime.totem.pg.mrp.srp.members.1.status (str) = joined runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0 runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.122.102) runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 2 runtime.totem.pg.mrp.srp.members.2.status (str) = joined [root@pcmk-1 ~]# pcs status corosync Membership information -------------------------- Nodeid Votes Name 1 1 pcmk-1 (local) 2 1 pcmk-2
You should see both nodes have joined the cluster.
4.3. Verify Pacemaker Installation
Now that we have confirmed that Corosync is functional, we can check the rest of the stack. Pacemaker has already been started, so verify the necessary processes are running:
[root@pcmk-1 ~]# ps axf PID TTY STAT TIME COMMAND 2 ? S 0:00 [kthreadd] ...lots of processes... 1362 ? Ssl 0:35 corosync 1379 ? Ss 0:00 /usr/sbin/pacemakerd -f 1380 ? Ss 0:00 \_ /usr/libexec/pacemaker/cib 1381 ? Ss 0:00 \_ /usr/libexec/pacemaker/stonithd 1382 ? Ss 0:00 \_ /usr/libexec/pacemaker/lrmd 1383 ? Ss 0:00 \_ /usr/libexec/pacemaker/attrd 1384 ? Ss 0:00 \_ /usr/libexec/pacemaker/pengine 1385 ? Ss 0:00 \_ /usr/libexec/pacemaker/crmd
If that looks OK, check the pcs status output:
[root@pcmk-1 ~]# pcs status Cluster name: mycluster WARNING: no stonith devices and stonith-enabled is not false Last updated: Tue Dec 16 16:15:29 2014 Last change: Tue Dec 16 15:49:47 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 0 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Finally, ensure there are no startup errors (aside from messages relating to not having STONITH configured, which are OK at this point):
[root@pcmk-1 ~]# journalctl | grep -i error
注意
Other operating systems may report startup errors in other locations, for example /var/log/messages.
Repeat these checks on the other node. The results should be the same.
第 5 章 Create an Active/Passive Cluster
目录
5.1. Explore the Existing Configuration
5.1. Explore the Existing Configuration
When Pacemaker starts up, it automatically records the number and details of the nodes in the cluster, as well as which stack is being used and the version of Pacemaker being used.
The first few lines of output should look like this:
[root@pcmk-1 ~]# pcs status Cluster name: mycluster WARNING: no stonith devices and stonith-enabled is not false Last updated: Tue Dec 16 16:15:29 2014 Last change: Tue Dec 16 15:49:47 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 0 Resources configured Online: [ pcmk-1 pcmk-2 ]
For those who are not afraid of XML, you can see the raw cluster configuration and status by using the pcs cluster cib command.
例 5.1. The last XML you’ll see in this document
[root@pcmk-1 ~]# pcs cluster cib
<cib crm_feature_set="3.0.9" validate-with="pacemaker-2.3" epoch="5" num_updates="8" admin_epoch="0" cib-last-written="Tue Dec 16 15:49:47 2014" have-quorum="1" dc-uuid="2"> <configuration> <crm_config> <cluster_property_set id="cib-bootstrap-options"> <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/> <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.12-a14efad"/> <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/> <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="mycluster"/> </cluster_property_set> </crm_config> <nodes> <node id="1" uname="pcmk-1"/> <node id="2" uname="pcmk-2"/> </nodes> <resources/> <constraints/> </configuration> <status> <node_state id="2" uname="pcmk-2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member"> <lrm id="2"> <lrm_resources/> </lrm> <transient_attributes id="2"> <instance_attributes id="status-2"> <nvpair id="status-2-shutdown" name="shutdown" value="0"/> <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/> </instance_attributes> </transient_attributes> </node_state> <node_state id="1" uname="pcmk-1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member"> <lrm id="1"> <lrm_resources/> </lrm> <transient_attributes id="1"> <instance_attributes id="status-1"> <nvpair id="status-1-shutdown" name="shutdown" value="0"/> <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/> </instance_attributes> </transient_attributes> </node_state> </status> </cib>
Before we make any changes, it’s a good idea to check the validity of the configuration.
[root@pcmk-1 ~]# crm_verify -L -V error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity Errors found during check: config not valid
As you can see, the tool has found some errors.
In order to guarantee the safety of your data, [5] the default for STONITH [6] in Pacemaker is enabled. However, it also knows when no STONITH configuration has been supplied and reports this as a problem (since the cluster would not be able to make progress if a situation requiring node fencing arose).
We will disable this feature for now and configure it later.
To disable STONITH, set the stonith-enabled cluster option to false:
[root@pcmk-1 ~]# pcs property set stonith-enabled=false [root@pcmk-1 ~]# crm_verify -L
With that option set, verifying the configuration now reports no errors.
警告
The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will refuse to support clusters that have STONITH disabled.
We disable STONITH here only to defer the discussion of its configuration, which can differ widely from one installation to the next. See 第 8.1 节 “What is STONITH?” for information on why STONITH is important and details on how to configure it.
5.2. Add a Resource
Our first resource will be a unique IP address that the cluster can bring up on either node. Regardless of where any cluster service(s) are running, end users need a consistent address to contact them on. Here, I will choose 192.168.122.120 as the floating address, give it the imaginative name ClusterIP and tell the cluster to check whether it is running every 30 seconds.
警告
The chosen address must not already be in use on the network. Do not reuse an IP address one of the nodes already has configured.
[root@pcmk-1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \ ip=192.168.122.120 cidr_netmask=32 op monitor interval=30s
Another important piece of information here is ocf:heartbeat:IPaddr2. This tells Pacemaker three things about the resource you want to add:
-
The first field (ocf in this case) is the standard to which the resource script conforms and where to find it.
-
The second field (heartbeat in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in.
-
The third field (IPaddr2 in this case) is the name of the resource script.
To obtain a list of the available resource standards (the ocf part of ocf:heartbeat:IPaddr2), run:
[root@pcmk-1 ~]# pcs resource standards ocf lsb service systemd stonith
To obtain a list of the available OCF resource providers (the heartbeat part of ocf:heartbeat:IPaddr2), run:
[root@pcmk-1 ~]# pcs resource providers heartbeat openstack pacemaker
Finally, if you want to see all the resource agents available for a specific OCF provider (the IPaddr2 part ofocf:heartbeat:IPaddr2), run:
[root@pcmk-1 ~]# pcs resource agents ocf:heartbeat CTDB Delay Dummy Filesystem IPaddr IPaddr2 . . (skipping lots of resources to save space) . rsyncd slapd symlink tomcat
Now, verify that the IP resource has been added, and display the cluster’s status to see that it is now active:
[root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Tue Dec 16 17:44:40 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 1 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
5.3. Perform a Failover
Since our ultimate goal is high availability, we should test failover of our new resource before moving on.
First, find the node on which the IP resource is currently running.
[root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Tue Dec 16 17:44:40 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 1 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1
You can see that the status of the ClusterIP resource is Started on a particular node (in this example, pcmk-1). Shut down Pacemaker and Corosync on that machine to trigger a failover.
[root@pcmk-1 ~]# pcs cluster stop pcmk-1 Stopping Cluster...
注意
A cluster command such as pcs cluster stop nodename can be run from any node in the cluster, not just the affected node.
Verify that pacemaker and corosync are no longer running:
[root@pcmk-1 ~]# pcs status Error: cluster is not currently running on this node
Go to the other node, and check the cluster status.
[root@pcmk-2 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 10:30:56 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 1 Resources configured Online: [ pcmk-2 ] OFFLINE: [ pcmk-1 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Notice that pcmk-1 is OFFLINE for cluster purposes (its PCSD is still Online, allowing it to receive pcs commands, but it is not participating in the cluster).
Also notice that ClusterIP is now running on pcmk-2 — failover happened automatically, and no errors are reported.
Quorum
If a cluster splits into two (or more) groups of nodes that can no longer communicate with each other (aka partitions), quorum is used to prevent resources from starting on more nodes than desired, which would risk data corruption.
A cluster has quorum when more than half of all known nodes are online in the same partition, or for the mathematically inclined, whenever the following equation is true:
total_nodes < 2 * active_nodes
For example, if a 5-node cluster split into 3- and 2-node partitions, the 3-node partition would have quorum and could continue serving resources. If a 6-node cluster split into two 3-node partitions, neither partition would have quorum; pacemaker’s default behavior in such cases is to stop all resources, in order to prevent data corruption.
Two-node clusters are a special case. By the above definition, a two-node cluster would only have quorum when both nodes are running. This would make the creation of a two-node cluster pointless, [7] but corosync has the ability to treat two-node clusters as if only one node is required for quorum.
The pcs cluster setup command will automatically configure two_node: 1 in corosync.conf, so a two-node cluster will "just work".
If you are using a different cluster shell, you will have to configure corosync.conf appropriately yourself. If you are using older versions of corosync, you will have to ignore quorum at the pacemaker level, using pcs property set no-quorum-policy=ignore (or the equivalent command if you are using a different cluster shell).
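For reference, the quorum settings that pcs generates in corosync.conf for a two-node cluster look roughly like this (see 附录 B for a complete sample):
quorum {
    provider: corosync_votequorum
    two_node: 1
}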
Now, simulate node recovery by restarting the cluster stack on pcmk-1, and check the cluster’s status. (It may take a little while before the cluster gets going on the node, but it eventually will look like the below.)
[root@pcmk-1 ~]# pcs cluster start pcmk-1 pcmk-1: Starting Cluster... [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 10:50:11 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 1 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
注意
With older versions of pacemaker, the cluster might move the IP back to its original location (pcmk-1). Usually, this is no longer the case.
5.4. Prevent Resources from Moving after Recovery
In most circumstances, it is highly desirable to prevent healthy resources from being moved around the cluster. Moving resources almost always requires a period of downtime. For complex services such as databases, this period can be quite long.
To address this, Pacemaker has the concept of resource stickiness, which controls how strongly a service prefers to stay running where it is. You may like to think of it as the "cost" of any downtime. By default, Pacemaker assumes there is zero cost associated with moving resources and will do so to achieve "optimal" [8] resource placement. We can specify a different stickiness for every resource, but it is often sufficient to change the default.
[root@pcmk-1 ~]# pcs resource defaults resource-stickiness=100 [root@pcmk-1 ~]# pcs resource defaults resource-stickiness: 100
注意
Older versions of pcs required that rsc be added after resource in the above commands.
[5] If the data is corrupt, there is little point in continuing to make it available
[6] A common node fencing mechanism. Used to ensure data integrity by powering off "bad" nodes
[7] Some would argue that two-node clusters are always pointless, but that is an argument for another time
[8] Pacemaker’s definition of optimal may not always agree with that of a human. The order in which Pacemaker processes lists of resources and nodes creates implicit preferences in situations where the administrator has not explicitly specified them.
第 6 章 Add Apache HTTP Server as a Cluster Service
目录
6.5. Ensure Resources Run on the Same Host
6.6. Ensure Resources Start and Stop in Order
6.7. Prefer One Node Over Another
Now that we have a basic but functional active/passive two-node cluster, we’re ready to add some real services. We’re going to start with Apache HTTP Server because it is a feature of many clusters and relatively simple to configure.
6.1. Install Apache
Before continuing, we need to make sure Apache is installed on both hosts. We also need the wget tool so that the cluster can check the status of the Apache server.
# yum install -y httpd wget # firewall-cmd --permanent --add-service=http # firewall-cmd --reload
重要
Do not enable the httpd service. Services that are intended to be managed via the cluster software should never be managed by the OS.
It is often useful, however, to manually start the service, verify that it works, then stop it again, before adding it to the cluster. This allows you to resolve any non-cluster-related problems before continuing. Since this is a simple example, we’ll skip that step here.
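If you do want to perform that manual check, a minimal sketch is to start the service by hand, confirm it is running, and stop it again before continuing:
# Start Apache manually, verify it is running, then stop it again
[root@pcmk-1 ~]# systemctl start httpd
[root@pcmk-1 ~]# systemctl status httpd
[root@pcmk-1 ~]# systemctl stop httpd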
6.2. Create Website Documents
We need to create a page for Apache to serve. On CentOS 7.1, the default Apache document root is /var/www/html, so we’ll create an index file there. For the moment, we will simplify things by serving a static site and manually synchronizing the data between the two nodes, so run this command on both nodes:
# cat <<-END >/var/www/html/index.html <html> <body>My Test Site - $(hostname)</body> </html> END
6.3. Enable the Apache status URL
In order to monitor the health of your Apache instance, and recover it if it fails, the resource agent used by Pacemaker assumes the server-status URL is available. On both nodes, enable the URL with:
# cat <<-END >/etc/httpd/conf.d/status.conf <Location /server-status> SetHandler server-status Require local </Location> END
注意
If you are using a different operating system, server-status may already be enabled or may be configurable in a different location. If you are using a version of Apache HTTP Server less than 2.4, the syntax will be different.
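For example, on Apache HTTP Server 2.2 the Require local line above would roughly correspond to the older access-control directives:
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1 ::1
</Location>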
6.4. Configure the Cluster
At this point, Apache is ready to go, and all that needs to be done is to add it to the cluster. Let’s call the resource WebSite. We need to use an OCF resource script called apache in the heartbeat namespace. [9] The script’s only required parameter is the path to the main Apache configuration file, and we’ll tell the cluster to check once a minute that Apache is still running.
[root@pcmk-1 ~]# pcs resource create WebSite ocf:heartbeat:apache \ configfile=/etc/httpd/conf/httpd.conf \ statusurl="http://localhost/server-status" \ op monitor interval=1min
By default, the operation timeout for all resources' start, stop, and monitor operations is 20 seconds. In many cases, this timeout period is less than a particular resource’s advised timeout period. For the purposes of this tutorial, we will adjust the global operation timeout default to 240 seconds.
[root@pcmk-1 ~]# pcs resource op defaults timeout=240s [root@pcmk-1 ~]# pcs resource op defaults timeout: 240s
注意
In a production cluster, it is usually better to adjust each resource’s start, stop, and monitor timeouts to values that are appropriate to the behavior observed in your environment, rather than adjust the global default.
After a short delay, we should see the cluster start Apache.
[root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 12:40:41 2014 Last change: Wed Dec 17 12:40:05 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 2 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Wait! The WebSite resource isn’t running on the same host as our IP address!
注意
If, in the pcs status output, you see the WebSite resource has failed to start, then you’ve likely not enabled the status URL correctly. You can check whether this is the problem by running:
wget -O - http://localhost/server-status
If you see Not Found or Forbidden in the output, then this is likely the problem. Ensure that the <Location /server-status> block is correct.
6.5. Ensure Resources Run on the Same Host
To reduce the load on any one machine, Pacemaker will generally try to spread the configured resources across the cluster nodes. However, we can tell the cluster that two resources are related and need to run on the same host (or not at all). Here, we instruct the cluster that WebSite can only run on the host that ClusterIP is active on.
To achieve this, we use a colocation constraint that indicates it is mandatory for WebSite to run on the same node as ClusterIP. The "mandatory" part of the colocation constraint is indicated by using a score of INFINITY. The INFINITY score also means that if ClusterIP is not active anywhere, WebSite will not be permitted to run.
注意
If ClusterIP is not active anywhere, WebSite will not be permitted to run anywhere.
重要
Colocation constraints are "directional", in that they imply certain things about the order in which the two resources will have a location chosen. In this case, we’re saying that WebSite needs to be placed on the same machine as ClusterIP, which implies that the cluster must know the location of ClusterIP before choosing a location for WebSite.
[root@pcmk-1 ~]# pcs constraint colocation add WebSite with ClusterIP INFINITY [root@pcmk-1 ~]# pcs constraint Location Constraints: Ordering Constraints: Colocation Constraints: WebSite with ClusterIP (score:INFINITY) [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 13:57:58 2014 Last change: Wed Dec 17 13:57:22 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 2 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
6.6. Ensure Resources Start and Stop in Order
Like many services, Apache can be configured to bind to specific IP addresses on a host or to the wildcard IP address. If Apache binds to the wildcard, it doesn’t matter whether an IP address is added before or after Apache starts; Apache will respond on that IP just the same. However, if Apache binds only to certain IP address(es), the order matters: If the address is added after Apache starts, Apache won’t respond on that address.
To be sure our WebSite responds regardless of Apache’s address configuration, we need to make sure ClusterIP not only runs on the same node, but starts before WebSite. A colocation constraint only ensures the resources run together, not the order in which they are started and stopped.
We do this by adding an ordering constraint. By default, all order constraints are mandatory, which means that the recovery of ClusterIP will also trigger the recovery of WebSite.
[root@pcmk-1 ~]# pcs constraint order ClusterIP then WebSite Adding ClusterIP WebSite (kind: Mandatory) (Options: first-action=start then-action=start) [root@pcmk-1 ~]# pcs constraint Location Constraints: Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY)
6.7. Prefer One Node Over Another
Pacemaker does not rely on any sort of hardware symmetry between nodes, so it may well be that one machine is more powerful than the other. In such cases, it makes sense to host the resources on the more powerful node if it is available. To do this, we create a location constraint.
In the location constraint below, we are saying the WebSite resource prefers the node pcmk-1 with a score of 50. Here, the score indicates how badly we’d like the resource to run at this location.
[root@pcmk-1 ~]# pcs constraint location WebSite prefers pcmk-1=50 [root@pcmk-1 ~]# pcs constraint Location Constraints: Resource: WebSite Enabled on: pcmk-1 (score:50) Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY) [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 14:11:49 2014 Last change: Wed Dec 17 14:11:20 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 2 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Wait, the resources are still running on pcmk-2!
Even though WebSite now prefers to run on pcmk-1, that preference is (intentionally) less than the resource stickiness (how much we preferred not to have unnecessary downtime).
To see the current placement scores, you can use a tool called crm_simulate.
[root@pcmk-1 ~]# crm_simulate -sL Current cluster status: Online: [ pcmk-1 pcmk-2 ] ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Allocation scores: native_color: ClusterIP allocation score on pcmk-1: 50 native_color: ClusterIP allocation score on pcmk-2: 200 native_color: WebSite allocation score on pcmk-1: -INFINITY native_color: WebSite allocation score on pcmk-2: 100 Transition Summary:
6.8. Move Resources Manually
There are always times when an administrator needs to override the cluster and force resources to move to a specific location. In this example, we will force the WebSite to move to pcmk-1 by updating our previous location constraint with a score of INFINITY.
[root@pcmk-1 ~]# pcs constraint location WebSite prefers pcmk-1=INFINITY [root@pcmk-1 ~]# pcs constraint Location Constraints: Resource: WebSite Enabled on: pcmk-1 (score:INFINITY) Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY) [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 14:19:34 2014 Last change: Wed Dec 17 14:18:37 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 2 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Once we’ve finished whatever activity required us to move the resources to pcmk-1 (in our case nothing), we can then allow the cluster to resume normal operation by removing the new constraint. Since we previously configured a default stickiness, the resources will remain on pcmk-1.
First, use the --full
option to get the constraint’s ID:
[root@pcmk-1 ~]# pcs constraint --full Location Constraints: Resource: WebSite Enabled on: pcmk-1 (score:INFINITY) (id:location-WebSite-pcmk-1-INFINITY) Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) (id:order-ClusterIP-WebSite-mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY) (id:colocation-WebSite-ClusterIP-INFINITY)
Then remove the desired constraint using its ID:
[root@pcmk-1 ~]# pcs constraint remove location-WebSite-pcmk-1-INFINITY [root@pcmk-1 ~]# pcs constraint Location Constraints: Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY)
Note that the location constraint is now gone. If we check the cluster status, we can also see that (as expected) the resources are still active on pcmk-1.
# pcs status Cluster name: mycluster Last updated: Wed Dec 17 14:25:21 2014 Last change: Wed Dec 17 14:24:29 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 2 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
[9] Compare the key used here, ocf:heartbeat:apache, with the one we used earlier for the IP address, ocf:heartbeat:IPaddr2
Chapter 7. Replicate Storage Using DRBD
Table of Contents
7.2. Allocate a Disk Volume for DRBD
7.6. Configure the Cluster for the DRBD device
7.7. Configure the Cluster for the Filesystem
Even if you’re serving up static websites, having to manually synchronize the contents of that website to all the machines in the cluster is not ideal. For dynamic websites, such as a wiki, it’s not even an option. Not everyone care afford network-attached storage, but somehow the data needs to be kept in sync.
Enter DRBD, which can be thought of as network-based RAID-1. [10]
7.1. Install the DRBD Packages
DRBD itself is included in the upstream kernel,[11] but we do need some utilities to use it effectively.
CentOS does not ship these utilities, so we need to enable a third-party repository to get them. Supported packages for many OSes are available from DRBD’s maker LINBIT, but here we’ll use the free ELRepo repository.
On both nodes, import the ELRepo package signing key, and enable the repository:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org # rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Now, we can install the DRBD kernel module and utilities:
# yum install -y kmod-drbd84 drbd84-utils
Important
The version of drbd84-utils shipped with CentOS 7.1 has a bug in the Pacemaker integration script. Until a fix is packaged, download the affected script directly from the upstream, on both nodes:
# curl -o /usr/lib/ocf/resource.d/linbit/drbd 'http://git.linbit.com/gitweb.cgi?p=drbd-utils.git;a=blob_plain;f=scripts/drbd.ocf;h=cf6b966341377a993d1bf5f585a5b9fe72eaa5f2;hb=c11ba026bbbbc647b8112543df142f2185cb4b4b'
This is a temporary fix that will be overwritten if the package is upgraded.
DRBD will not be able to run under the default SELinux security policies. If you are familiar with SELinux, you can modify the policies in a more fine-grained manner, but here we will simply exempt DRBD processes from SELinux control:
# semanage permissive -a drbd_t
We will configure DRBD to use port 7789, so allow that port from each host to the other:
[root@pcmk-1 ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.122.102" port port="7789" protocol="tcp" accept' success [root@pcmk-1 ~]# firewall-cmd --reload success
[root@pcmk-2 ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.122.101" port port="7789" protocol="tcp" accept' success [root@pcmk-2 ~]# firewall-cmd --reload success
Note
In this example, we have only two nodes, and all network traffic is on the same LAN. In production, it is recommended to use a dedicated, isolated network for cluster-related traffic, so the firewall configuration would likely be different; one approach would be to add the dedicated network interfaces to the trusted zone.
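For example, if the cluster and DRBD traffic ran over a dedicated interface, one option would be to trust that interface instead of adding per-port rules. The interface name eth1 below is only an assumption; run the equivalent on both nodes with your actual interface:

# firewall-cmd --permanent --zone=trusted --change-interface=eth1
# firewall-cmd --reload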
7.2. Allocate a Disk Volume for DRBD
DRBD will need its own block device on each node. This can be a physical disk partition or logical volume, of whatever size you need for your data. For this document, we will use a 1GiB logical volume, which is more than sufficient for a single HTML file and (later) GFS2 metadata.
[root@pcmk-1 ~]# vgdisplay | grep -e Name -e Free VG Name centos_pcmk-1 Free PE / Size 382 / 1.49 GiB [root@pcmk-1 ~]# lvcreate --name drbd-demo --size 1G centos_pcmk-1 Logical volume "drbd-demo" created [root@pcmk-1 ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert drbd-demo centos_pcmk-1 -wi-a----- 1.00g root centos_pcmk-1 -wi-ao---- 5.00g swap centos_pcmk-1 -wi-ao---- 1.00g
Repeat for the second node, making sure to use the same size:
[root@pcmk-1 ~]# ssh pcmk-2 -- lvcreate --name drbd-demo --size 1G centos_pcmk-2 Logical volume "drbd-demo" created
7.3. Configure DRBD
There is no series of commands for building a DRBD configuration, so simply run this on both nodes to use this sample configuration:
# cat <<END >/etc/drbd.d/wwwdata.res resource wwwdata { protocol C; meta-disk internal; device /dev/drbd1; syncer { verify-alg sha1; } net { allow-two-primaries; } on pcmk-1 { disk /dev/centos_pcmk-1/drbd-demo; address 192.168.122.101:7789; } on pcmk-2 { disk /dev/centos_pcmk-2/drbd-demo; address 192.168.122.102:7789; } } END
Important
Edit the file to use the hostnames, IP addresses and logical volume paths of your nodes if they differ from the ones used in this guide.
Note
Detailed information on the directives used in this configuration (and other alternatives) is available at http://www.drbd.org/users-guide/ch-configure.html
The allow-two-primaries option would not normally be used in an active/passive cluster. We are adding it here for the convenience of changing to an active/active cluster later.
7.4. Initialize DRBD
With the configuration in place, we can now get DRBD running.
These commands create the local metadata for the DRBD resource, ensure the DRBD kernel module is loaded, and bring up the DRBD resource. Run them on one node:
[root@pcmk-1 ~]# drbdadm create-md wwwdata initializing activity log NOT initializing bitmap Writing meta data... New drbd meta data block successfully created. [root@pcmk-1 ~]# modprobe drbd [root@pcmk-1 ~]# drbdadm up wwwdata
We can confirm DRBD’s status on this node:
[root@pcmk-1 ~]# cat /proc/drbd version: 8.4.6 (api:1/proto:86-101) GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R7, 2015-04-10 05:13:52 1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----s ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
Because we have not yet initialized the data, this node’s data is marked as Inconsistent. Because we have not yet initialized the second node, the local state is WFConnection (waiting for connection), and the partner node’s status is marked as Unknown.
Now, repeat the above commands on the second node. This time, when we check the status, it shows:
[root@pcmk-2 ~]# cat /proc/drbd version: 8.4.6 (api:1/proto:86-101) GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R7, 2015-04-10 05:13:52 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
You can see the state has changed to Connected, meaning the two DRBD nodes are communicating properly, and both nodes are in Secondary role with Inconsistent data.
To make the data consistent, we need to tell DRBD which node should be considered to have the correct data. In this case, since we are creating a new resource, both have garbage, so we’ll just pick pcmk-1 and run this command on it:
[root@pcmk-1 ~]# drbdadm primary --force wwwdata
Note
If you are using an older version of DRBD, the required syntax may be different. See the documentation for your version for how to perform these commands.
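For instance, older 8.3-era DRBD releases expressed this step roughly as follows; this is shown only as a pointer, so verify against your version's documentation before relying on it:

drbdadm -- --overwrite-data-of-peer primary wwwdata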
If we check the status immediately, we’ll see something like this:
[root@pcmk-1 ~]# cat /proc/drbd version: 8.4.6 (api:1/proto:86-101) GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R7, 2015-04-10 05:13:52 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----- ns:2872 nr:0 dw:0 dr:3784 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1045636 [>....................] sync'ed: 0.4% (1045636/1048508)K finish: 0:10:53 speed: 1,436 (1,436) K/sec
We can see that this node has the Primary role, the partner node has the Secondary role, this node’s data is now considered UpToDate, the partner node’s data is still Inconsistent, and a progress bar shows how far along the partner node is in synchronizing the data.
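If you want to watch the synchronization progress update in place, one convenient way (assuming the watch utility is installed) is:

watch -n1 cat /proc/drbd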
After a while, the sync should finish, and you’ll see something like:
[root@pcmk-1 ~]# cat /proc/drbd version: 8.4.6 (api:1/proto:86-101) GIT-hash: 833d830e0152d1e457fa7856e71e11248ccf3f70 build by phil@Build64R7, 2015-04-10 05:13:52 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- ns:1048508 nr:0 dw:0 dr:1049420 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Both sets of data are now UpToDate, and we can proceed to creating and populating a filesystem for our WebSite resource’s documents.
7.5. Populate the DRBD Disk
On the node with the primary role (pcmk-1 in this example), create a filesystem on the DRBD device:
[root@pcmk-1 ~]# mkfs.xfs /dev/drbd1 meta-data=/dev/drbd1 isize=256 agcount=4, agsize=65532 blks = sectsz=512 attr=2, projid32bit=1 = crc=0 finobt=0 data = bsize=4096 blocks=262127, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=0 log =internal log bsize=4096 blocks=853, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0
Note
In this example, we create an xfs filesystem with no special options. In a production environment, you should choose a filesystem type and options that are suitable for your application.
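For instance, if you wanted a filesystem label to make the device easier to identify, you might pass extra options to mkfs; the label name below is purely illustrative:

mkfs.xfs -L webdata /dev/drbd1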
Mount the newly created filesystem, populate it with our web document, give it the same SELinux policy as the web document root, then unmount it (the cluster will handle mounting and unmounting it later):
[root@pcmk-1 ~]# mount /dev/drbd1 /mnt [root@pcmk-1 ~]# cat <<-END >/mnt/index.html <html> <body>My Test Site - DRBD</body> </html> END [root@pcmk-1 ~]# chcon -R --reference=/var/www/html /mnt [root@pcmk-1 ~]# umount /dev/drbd1
7.6. Configure the Cluster for the DRBD device
One handy feature pcs
has is the ability to queue up several changes into a file and commit those changes atomically. To do this, start by populating the file with the current raw XML config from the CIB.
[root@pcmk-1 ~]# pcs cluster cib drbd_cfg
Using the pcs -f
option, make changes to the configuration saved in the drbd_cfg
file. These changes will not be seen by the cluster until the drbd_cfg
file is pushed into the live cluster’s CIB later.
Here, we create a cluster resource for the DRBD device, and an additional clone resource to allow the resource to run on both nodes at the same time.
[root@pcmk-1 ~]# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \ drbd_resource=wwwdata op monitor interval=60s [root@pcmk-1 ~]# pcs -f drbd_cfg resource master WebDataClone WebData \ master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \ notify=true [root@pcmk-1 ~]# pcs -f drbd_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Stopped: [ pcmk-1 pcmk-2 ]
After you are satisfied with all the changes, you can commit them all at once by pushing the drbd_cfg file into the live CIB.
[root@pcmk-1 ~]# pcs cluster cib-push drbd_cfg CIB updated
Note
Early versions of pcs
required push cib
in place of cib-push
above.
Let’s see what the cluster did with the new configuration:
[root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Fri Aug 14 09:29:41 2015 Last change: Fri Aug 14 09:29:25 2015 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 4 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
We can see that WebDataClone (our DRBD device) is running as master (DRBD’s primary role) on pcmk-1 and slave (DRBD’s secondary role) on pcmk-2.
Important
The resource agent should load the DRBD module when needed if it’s not already loaded. If that does not happen, configure your operating system to load the module at boot time. For CentOS 7.1, you would run this on both nodes:
# echo drbd >/etc/modules-load.d/drbd.conf
7.7. Configure the Cluster for the Filesystem
Now that we have a working DRBD device, we need to mount its filesystem.
In addition to defining the filesystem, we also need to tell the cluster where it can be located (only on the DRBD Primary) and when it is allowed to start (after the Primary was promoted).
We are going to take a shortcut when creating the resource this time. Instead of explicitly saying we want the ocf:heartbeat:Filesystem script, we are only going to ask for Filesystem. We can do this because we know there is only one resource script named Filesystem available to Pacemaker, and that pcs is smart enough to fill in the ocf:heartbeat: portion for us correctly in the configuration. If there were multiple Filesystem scripts from different OCF providers, we would need to specify the exact one we wanted.
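For reference, the fully qualified form of the create command used below would look like this; it is functionally identical and is shown only to make the shortcut explicit:

pcs -f fs_cfg resource create WebFS ocf:heartbeat:Filesystem \
      device="/dev/drbd1" directory="/var/www/html" fstype="xfs"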
Once again, we will queue our changes to a file and then push the new configuration to the cluster as the final step.
[root@pcmk-1 ~]# pcs cluster cib fs_cfg [root@pcmk-1 ~]# pcs -f fs_cfg resource create WebFS Filesystem \ device="/dev/drbd1" directory="/var/www/html" fstype="xfs" [root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebFS with WebDataClone INFINITY with-rsc-role=Master [root@pcmk-1 ~]# pcs -f fs_cfg constraint order promote WebDataClone then start WebFS Adding WebDataClone WebFS (kind: Mandatory) (Options: first-action=promote then-action=start)
We also need to tell the cluster that Apache needs to run on the same node as the filesystem, and that the filesystem must be started before Apache.
[root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebSite with WebFS INFINITY [root@pcmk-1 ~]# pcs -f fs_cfg constraint order WebFS then WebSite Adding WebFS WebSite (kind: Mandatory) (Options: first-action=start then-action=start)
Review the updated configuration.
[root@pcmk-1 ~]# pcs -f fs_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) promote WebDataClone then start WebFS (kind:Mandatory) start WebFS then start WebSite (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY) WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite with WebFS (score:INFINITY)
[root@pcmk-1 ~]# pcs -f fs_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Stopped
After reviewing the new configuration, upload it and watch the cluster put it into effect.
[root@pcmk-1 ~]# pcs cluster cib-push fs_cfg [root@pcmk-1 ~]# pcs status Last updated: Fri Aug 14 09:34:11 2015 Last change: Fri Aug 14 09:34:09 2015 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 5 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
7.8. Test Cluster Failover
Previously, we used pcs cluster stop pcmk-1
to stop all cluster services on pcmk-1, failing over the cluster resources, but there is another way to safely simulate node failure.
We can put the node into standby mode. Nodes in this state continue to run corosync and pacemaker but are not allowed to run resources. Any resources found active there will be moved elsewhere. This feature can be particularly useful when performing system administration tasks such as updating packages used by cluster resources.
Put the active node into standby mode, and observe the cluster move all the resources to the other node. The node’s status will change to indicate that it can no longer host resources.
[root@pcmk-1 ~]# pcs cluster standby pcmk-1 [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Fri Aug 14 09:36:49 2015 Last change: Fri Aug 14 09:36:43 2015 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 5 Resources configured Node pcmk-1 (1): standby Online: [ pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Stopped: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Once we've done whatever we needed to do on pcmk-1 (in our case nothing, we just wanted to see the resources move), we can allow the node to be a full cluster member again.
[root@pcmk-1 ~]# pcs cluster unstandby pcmk-1 [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Fri Aug 14 09:38:02 2015 Last change: Fri Aug 14 09:37:56 2015 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 5 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Notice that pcmk-1 is back to the Online state, and that the cluster resources stay where they are due to our resource stickiness settings configured earlier.
[10] See http://www.drbd.org/ for details.
[11] Since version 2.6.33
Chapter 8. Configure STONITH
Table of Contents
8.3. Configure the Cluster for STONITH
8.1. What is STONITH?
STONITH (Shoot The Other Node In The Head aka. fencing) protects your data from being corrupted by rogue nodes or unintended concurrent access.
Just because a node is unresponsive doesn’t mean it has stopped accessing your data. The only way to be 100% sure that your data is safe, is to use STONITH to ensure that the node is truly offline before allowing the data to be accessed from another node.
STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere.
8.2. Choose a STONITH Device
It is crucial that your STONITH device can allow the cluster to differentiate between a node failure and a network failure.
A common mistake people make when choosing a STONITH device is to use a remote power switch (such as many on-board IPMI controllers) that shares power with the node it controls. If the power fails in such a case, the cluster cannot be sure whether the node is really offline, or active and suffering from a network fault, so the cluster will stop all resources to avoid a possible split-brain situation.
Likewise, any device that relies on the machine being active (such as SSH-based "devices" sometimes used during testing) is inappropriate.
8.3. Configure the Cluster for STONITH
-
Install the STONITH agent(s). To see what packages are available, run
yum search fence-
. Be sure to install the package(s) on all cluster nodes. -
Configure the STONITH device itself to be able to fence your nodes and accept fencing requests. This includes any necessary configuration on the device and on the nodes, and any firewall or SELinux changes needed. Test the communication between the device and your nodes.
-
Find the correct STONITH agent script:
pcs stonith list
-
Find the parameters associated with the device:
pcs stonith describe
agent_name
-
Create a local copy of the CIB:
pcs cluster cib stonith_cfg
-
Create the fencing resource:
pcs -f stonith_cfg stonith create
stonith_id stonith_device_type [stonith_device_options]
Any flags that do not take arguments, such as
--ssl
, should be passed asssl=1
. -
Enable STONITH in the cluster:
pcs -f stonith_cfg property set stonith-enabled=true
-
If the device does not know how to fence nodes based on their uname, you may also need to set the special pcmk_host_map parameter (a hypothetical mapping is sketched after this list). See
man stonithd
for details. -
If the device does not support the list command, you may also need to set the special pcmk_host_list and/or pcmk_host_check parameters. See
man stonithd
for details. -
If the device does not expect the victim to be specified with the port parameter, you may also need to set the special pcmk_host_argument parameter. See
man stonithd
for details. -
Commit the new configuration:
pcs cluster cib-push stonith_cfg
-
Once the STONITH resource is running, test it (you might want to stop the cluster on that machine first):
stonith_admin --reboot
nodename
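As promised above, here is a hypothetical illustration of pcmk_host_map for a device that identifies its targets by plug number rather than by node name. Every value (agent, address, credentials and plug numbers) is invented for the example; only the node names match this guide:

pcs -f stonith_cfg stonith create chassis-fencing fence_apc \
      ipaddr=10.0.0.2 login=apc passwd=apc \
      pcmk_host_map="pcmk-1:1;pcmk-2:2" op monitor interval=60s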
8.4. Example
For this example, assume we have a chassis containing four nodes and an IPMI device active on 10.0.0.1. Following the steps above would go something like this:
Step 1: Install the fence-agents-ipmilan package on both nodes.
Step 2: Configure the IP address, authentication credentials, etc. in the IPMI device itself.
Step 3: Choose the fence_ipmilan STONITH agent.
Step 4: Obtain the agent’s possible parameters:
[root@pcmk-1 ~]# pcs stonith describe fence_ipmilan Stonith options for: fence_ipmilan ipport: TCP/UDP port to use for connection with device inet6_only: Forces agent to use IPv6 addresses only ipaddr (required): IP Address or Hostname passwd_script: Script to retrieve password method: Method to fence (onoff|cycle) inet4_only: Forces agent to use IPv4 addresses only passwd: Login password or passphrase lanplus: Use Lanplus to improve security of connection auth: IPMI Lan Auth type. cipher: Ciphersuite to use (same as ipmitool -C parameter) privlvl: Privilege level on IPMI device action (required): Fencing Action login: Login Name verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit power_wait: Wait X seconds after issuing ON/OFF login_timeout: Wait X seconds for cmd prompt after login power_timeout: Test X seconds for status change after ON/OFF delay: Wait X seconds before fencing is started ipmitool_path: Path to ipmitool binary shell_timeout: Wait X seconds for cmd prompt after issuing command retry_on: Count of attempts to retry power on sudo: Use sudo (without password) when calling 3rd party sotfware. stonith-timeout: How long to wait for the STONITH action (reboot, on, off) to complete per a stonith device. priority: The priority of the stonith resource. Devices are tried in order of highest priority to lowest. pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list). pcmk_host_check: How to determine which machines are controlled by the device.
Step 5: pcs cluster cib stonith_cfg
Step 6: Here are example parameters for creating our STONITH resource:
[root@pcmk-1 ~]# pcs -f stonith_cfg stonith create ipmi-fencing fence_ipmilan \ pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser \ passwd=acd123 op monitor interval=60s [root@pcmk-1 ~]# pcs -f stonith_cfg stonith ipmi-fencing (stonith:fence_ipmilan): Stopped
Steps 7-10: Enable STONITH in the cluster:
[root@pcmk-1 ~]# pcs -f stonith_cfg property set stonith-enabled=true [root@pcmk-1 ~]# pcs -f stonith_cfg property Cluster Properties: cluster-infrastructure: corosync cluster-name: mycluster dc-version: 1.1.12-a14efad have-watchdog: false stonith-enabled: true
Step 11: pcs cluster cib-push stonith_cfg
Step 12: Test:
[root@pcmk-1 ~]# pcs cluster stop pcmk-2 [root@pcmk-1 ~]# stonith_admin --reboot pcmk-2
After a successful test, login to any rebooted nodes, and start the cluster (with pcs cluster start
).
Chapter 9. Convert Cluster to Active/Active
Table of Contents
9.1. Install Cluster Filesystem Software
9.2. Configure the Cluster for the DLM
9.3. Create and Populate GFS2 Filesystem
9.6. Clone the Filesystem and Apache Resources
The primary requirement for an Active/Active cluster is that the data required for your services is available, simultaneously, on both machines. Pacemaker makes no requirement on how this is achieved; you could use a SAN if you had one available, but since DRBD supports multiple Primaries, we can continue to use it here.
9.1. Install Cluster Filesystem Software
The only hitch is that we need to use a cluster-aware filesystem. The one we used earlier with DRBD, xfs, is not one of those. Both OCFS2 and GFS2 are supported; here, we will use GFS2.
On both nodes, install the GFS2 command-line utilities and the Distributed Lock Manager (DLM) required by cluster filesystems:
# yum install -y gfs2-utils dlm
9.2. Configure the Cluster for the DLM
The DLM needs to run on both nodes, so we'll start by creating a resource for it (using the ocf:pacemaker:controld resource script), and clone it:
[root@pcmk-1 ~]# pcs cluster cib dlm_cfg [root@pcmk-1 ~]# pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s [root@pcmk-1 ~]# pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1 [root@pcmk-1 ~]# pcs -f dlm_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started Clone Set: dlm-clone [dlm] Stopped: [ pcmk-1 pcmk-2 ]
Activate our new configuration, and see how the cluster responds:
[root@pcmk-1 ~]# pcs cluster cib-push dlm_cfg CIB updated [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Fri Aug 14 11:19:36 2015 Last change: Fri Aug 14 11:19:28 2015 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 8 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1 Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
9.3. Create and Populate GFS2 Filesystem
Before we do anything to the existing partition, we need to make sure it is unmounted. We do this by telling the cluster to stop the WebFS resource. This will ensure that any other resources using WebFS are not only stopped, but stopped in the correct order.
[root@pcmk-1 ~]# pcs resource disable WebFS [root@pcmk-1 ~]# pcs resource ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Stopped Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Stopped Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ]
You can see that both Apache and WebFS have been stopped, and that pcmk-2 is the current master for the DRBD device.
Now we can create a new GFS2 filesystem on the DRBD device.
Warning
This will erase all previous content stored on the DRBD device. Be sure to back up any important data first.
Important
Run the next command on whichever node has the DRBD Primary role. Otherwise, you will receive the message:
/dev/drbd1: Read-only file system
[root@pcmk-2 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:web /dev/drbd1 It appears to contain an existing filesystem (xfs) This will destroy any data on /dev/drbd1 Are you sure you want to proceed? [y/n]y Device: /dev/drbd1 Block size: 4096 Device size: 1.00 GB (262127 blocks) Filesystem size: 1.00 GB (262126 blocks) Journals: 2 Resource groups: 5 Locking protocol: "lock_dlm" Lock table: "mycluster:web" UUID: 9a72c488-d8a7-24c9-ceee-add7a8ca52c2
The mkfs.gfs2
command required a number of additional parameters:
-
-p lock_dlm
specifies that we want to use the kernel’s DLM. -
-j 2
indicates that the filesystem should reserve enough space for two journals (one for each node that will access the filesystem). -
-t mycluster:web specifies the lock table name. The format for this field is clustername:fsname. For clustername, we need to use the same value we specified originally with pcs cluster setup --name (which is also the value of cluster_name in /etc/corosync/corosync.conf). If you are unsure what your cluster name is, you can look in /etc/corosync/corosync.conf or execute the command pcs cluster corosync pcmk-1 | grep cluster_name.
Now we can (re-)populate the new filesystem with data (web pages). We’ll create yet another variation on our home page.
[root@pcmk-2 ~]# mount /dev/drbd1 /mnt [root@pcmk-2 ~]# cat <<-END >/mnt/index.html <html> <body>My Test Site - GFS2</body> </html> END [root@pcmk-2 ~]# chcon -R --reference=/var/www/html /mnt [root@pcmk-2 ~]# umount /dev/drbd1 [root@pcmk-2 ~]# drbdadm verify wwwdata
9.4. Reconfigure the Cluster for GFS2
With the WebFS resource stopped, let’s update the configuration.
[root@pcmk-1 ~]# pcs resource show WebFS Resource: WebFS (class=ocf provider=heartbeat type=Filesystem) Attributes: device=/dev/drbd1 directory=/var/www/html fstype=xfs Meta Attrs: target-role=Stopped Operations: start interval=0s timeout=60 (WebFS-start-timeout-60) stop interval=0s timeout=60 (WebFS-stop-timeout-60) monitor interval=20 timeout=40 (WebFS-monitor-interval-20)
The fstype option needs to be updated to gfs2 instead of xfs.
[root@pcmk-1 ~]# pcs resource update WebFS fstype=gfs2 [root@pcmk-1 ~]# pcs resource show WebFS Resource: WebFS (class=ocf provider=heartbeat type=Filesystem) Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2 Meta Attrs: target-role=Stopped Operations: start interval=0s timeout=60 (WebFS-start-timeout-60) stop interval=0s timeout=60 (WebFS-stop-timeout-60) monitor interval=20 timeout=40 (WebFS-monitor-interval-20)
GFS2 requires that DLM be running, so we also need to set up new colocation and ordering constraints for it:
[root@pcmk-1 ~]# pcs constraint colocation add WebFS with dlm-clone INFINITY [root@pcmk-1 ~]# pcs constraint order dlm-clone then WebFS Adding dlm-clone WebFS (kind: Mandatory) (Options: first-action=start then-action=start)
9.5. Clone the IP address
There’s no point making the services active on both locations if we can’t reach them both, so let’s clone the IP address.
The IPaddr2 resource agent has built-in intelligence for when it is configured as a clone. It will utilize a multicast MAC address to have the local switch send the relevant packets to all nodes in the cluster, together with iptables clusterip rules on the nodes so that any given packet will be grabbed by exactly one node. This will give us a simple but effective form of load-balancing requests between our two nodes.
Let’s start a new config, and clone our IP:
[root@pcmk-1 ~]# pcs cluster cib loadbalance_cfg [root@pcmk-1 ~]# pcs -f loadbalance_cfg resource clone ClusterIP \ clone-max=2 clone-node-max=2 globally-unique=true
-
clone-max=2
tells the resource agent to split packets this many ways. This should equal the number of nodes that can host the IP. -
clone-node-max=2
says that one node can run up to 2 instances of the clone. This should also equal the number of nodes that can host the IP, so that if any node goes down, another node can take over the failed node’s "request bucket". Otherwise, requests intended for the failed node would be discarded. -
globally-unique=true
tells the cluster that one clone isn’t identical to another (each handles a different "bucket"). This also tells the resource agent to insert iptables rules so each host only processes packets in its bucket(s).
Notice that when the ClusterIP becomes a clone, the constraints referencing ClusterIP now reference the clone. This is done automatically by pcs.
[root@pcmk-1 ~]# pcs -f loadbalance_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite (kind:Mandatory) promote WebDataClone then start WebFS (kind:Mandatory) start WebFS then start WebSite (kind:Mandatory) start dlm-clone then start WebFS (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP-clone (score:INFINITY) WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite with WebFS (score:INFINITY) WebFS with dlm-clone (score:INFINITY)
Now we must tell the resource how to decide which requests are processed by which hosts. To do this, we specify the clusterip_hash parameter. The value of sourceip means that the source IP address of incoming packets will be hashed; each node will process a certain range of hashes.
[root@pcmk-1 ~]# pcs -f loadbalance_cfg resource update ClusterIP clusterip_hash=sourceip
Load our configuration to the cluster, and see how it responds.
[root@pcmk-1 ~]# pcs cluster cib-push loadbalance_cfg CIB updated [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Fri Aug 14 11:32:07 2015 Last change: Fri Aug 14 11:32:04 2015 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 9 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: WebSite (ocf::heartbeat:apache): Stopped Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Stopped ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1 Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started pcmk-1 ClusterIP:1 (ocf::heartbeat:IPaddr2): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
If desired, you can demonstrate that all request buckets are working by using a tool such as arping
from several source hosts to see which host responds to each.
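A minimal check might look like the following, run from two or three different client machines on the same network, to see which node answers each one (the interface name eth0 is just an assumption; substitute your own):

arping -I eth0 -c 3 192.168.122.120

On the cluster nodes themselves, you should also be able to spot the CLUSTERIP rule that the resource agent inserted, for example with iptables -n -L INPUT (the exact rule text depends on the agent and iptables versions).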
9.6. Clone the Filesystem and Apache Resources
Now that we have a cluster filesystem ready to go, and our nodes can load-balance requests to a shared IP address, we can configure the cluster so both nodes mount the filesystem and respond to web requests.
Clone the filesystem and Apache resources in a new configuration. Notice how pcs automatically updates the relevant constraints again.
[root@pcmk-1 ~]# pcs cluster cib active_cfg [root@pcmk-1 ~]# pcs -f active_cfg resource clone WebFS [root@pcmk-1 ~]# pcs -f active_cfg resource clone WebSite [root@pcmk-1 ~]# pcs -f active_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite-clone (kind:Mandatory) promote WebDataClone then start WebFS-clone (kind:Mandatory) start WebFS-clone then start WebSite-clone (kind:Mandatory) start dlm-clone then start WebFS-clone (kind:Mandatory) Colocation Constraints: WebSite-clone with ClusterIP-clone (score:INFINITY) WebFS-clone with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite-clone with WebFS-clone (score:INFINITY) WebFS-clone with dlm-clone (score:INFINITY)
Tell the cluster that it is now allowed to promote both instances to be DRBD Primary (aka. master).
[root@pcmk-1 ~]# pcs -f active_cfg resource update WebDataClone master-max=2
Finally, load our configuration to the cluster, and re-enable the WebFS resource (which we disabled earlier).
[root@pcmk-1 ~]# pcs cluster cib-push active_cfg CIB updated [root@pcmk-1 ~]# pcs resource enable WebFS
After all the processes are started, the status should look similar to this.
[root@pcmk-1 ~]# pcs resource Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started ClusterIP:1 (ocf::heartbeat:IPaddr2): Started Clone Set: WebFS-clone [WebFS] Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSite-clone [WebSite] Started: [ pcmk-1 pcmk-2 ]
9.7. Test Failover
Testing failover is left as an exercise for the reader. For example, you can put one node into standby mode, use pcs status
to confirm that its ClusterIP clone was moved to the other node, and use arping
to verify that packets are not being lost from any source host.
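A minimal sketch of that exercise, assuming a separate client host on the same LAN and eth0 as its interface (both assumptions; adjust as needed):

pcs cluster standby pcmk-2
pcs status
arping -I eth0 -c 3 192.168.122.120        (run from the client host)
pcs cluster unstandby pcmk-2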
Note
You may find that when a failed node rejoins the cluster, both ClusterIP clones stay on one node, due to the resource stickiness. While this works fine, it effectively eliminates load-balancing and returns the cluster to an active-passive setup again. You can avoid this by disabling stickiness for the IP address resource:
[root@pcmk-1 ~]# pcs resource meta ClusterIP resource-stickiness=0
Configuration Recap
Table of Contents
A.1. Final Cluster Configuration
[root@pcmk-1 ~]# pcs resource Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started ClusterIP:1 (ocf::heartbeat:IPaddr2): Started Clone Set: WebFS-clone [WebFS] Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSite-clone [WebSite] Started: [ pcmk-1 pcmk-2 ]
[root@pcmk-1 ~]# pcs resource op defaults timeout: 240s
[root@pcmk-1 ~]# pcs stonith impi-fencing (stonith:fence_ipmilan) Started
[root@pcmk-1 ~]# pcs constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite-clone (kind:Mandatory) promote WebDataClone then start WebFS-clone (kind:Mandatory) start WebFS-clone then start WebSite-clone (kind:Mandatory) start dlm-clone then start WebFS-clone (kind:Mandatory) Colocation Constraints: WebSite-clone with ClusterIP-clone (score:INFINITY) WebFS-clone with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite-clone with WebFS-clone (score:INFINITY) WebFS-clone with dlm-clone (score:INFINITY)
[root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Fri Aug 14 12:05:37 2015 Last change: Fri Aug 14 11:49:29 2015 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 11 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: impi-fencing (stonith:fence_ipmilan): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started pcmk-2 ClusterIP:1 (ocf::heartbeat:IPaddr2): Started pcmk-1 Clone Set: WebFS-clone [WebFS] Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSite-clone [WebSite] Started: [ pcmk-1 pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
[root@pcmk-1 ~]# pcs cluster cib
<cib crm_feature_set="3.0.9" validate-with="pacemaker-2.3" epoch="51" num_updates="16" admin_epoch="0" cib-last-written="Fri Aug 14 11:49:29 2015" have-quorum="1" dc-uuid="1"> <crm_config> <cluster_property_set id="cib-bootstrap-options"> <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/> <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.12-a14efad"/> <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/> <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="mycluster"/> <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1419129162"/> <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/> </cluster_property_set> </crm_config> <nodes> <node id="1" uname="pcmk-1"> <instance_attributes id="nodes-1"/> </node> <node id="2" uname="pcmk-2"> <instance_attributes id="nodes-2"/> </node> </nodes> <resources> <primitive class="stonith" id="impi-fencing" type="fence_ipmilan"> <instance_attributes id="impi-fencing-instance_attributes"> <nvpair id="impi-fencing-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="pcmk-1 pcmk-2"/> <nvpair id="impi-fencing-instance_attributes-ipaddr" name="ipaddr" value="10.0.0.1"/> <nvpair id="impi-fencing-instance_attributes-login" name="login" value="testuser"/> <nvpair id="impi-fencing-instance_attributes-passwd" name="passwd" value="acd123"/> </instance_attributes> <operations> <op id="impi-fencing-interval-60s" interval="60s" name="monitor"/> </operations> </primitive> <master id="WebDataClone"> <primitive class="ocf" id="WebData" provider="linbit" type="drbd"> <instance_attributes id="WebData-instance_attributes"> <nvpair id="WebData-instance_attributes-drbd_resource" name="drbd_resource" value="wwwdata"/> </instance_attributes> <operations> <op id="WebData-start-timeout-240" interval="0s" name="start" timeout="240"/> <op id="WebData-promote-timeout-90" interval="0s" name="promote" timeout="90"/> <op id="WebData-demote-timeout-90" interval="0s" name="demote" timeout="90"/> <op id="WebData-stop-timeout-100" interval="0s" name="stop" timeout="100"/> <op id="WebData-monitor-interval-60s" interval="60s" name="monitor"/> </operations> </primitive> <meta_attributes id="WebDataClone-meta_attributes"> <nvpair id="WebDataClone-meta_attributes-master-max" name="master-max" value="2"/> <nvpair id="WebDataClone-meta_attributes-master-node-max" name="master-node-max" value="1"/> <nvpair id="WebDataClone-meta_attributes-clone-max" name="clone-max" value="2"/> <nvpair id="WebDataClone-meta_attributes-clone-node-max" name="clone-node-max" value="1"/> <nvpair id="WebDataClone-meta_attributes-notify" name="notify" value="true"/> </meta_attributes> </master> <clone id="dlm-clone"> <primitive class="ocf" id="dlm" provider="pacemaker" type="controld"> <instance_attributes id="dlm-instance_attributes"/> <operations> <op id="dlm-start-timeout-90" interval="0s" name="start" timeout="90"/> <op id="dlm-stop-timeout-100" interval="0s" name="stop" timeout="100"/> <op id="dlm-monitor-interval-60s" interval="60s" name="monitor"/> </operations> </primitive> <meta_attributes id="dlm-clone-meta"> <nvpair id="dlm-clone-max" name="clone-max" value="2"/> <nvpair id="dlm-clone-node-max" name="clone-node-max" value="1"/> </meta_attributes> </clone> <clone id="ClusterIP-clone"> <primitive class="ocf" id="ClusterIP" provider="heartbeat" type="IPaddr2"> <instance_attributes 
id="ClusterIP-instance_attributes"> <nvpair id="ClusterIP-instance_attributes-ip" name="ip" value="192.168.122.120"/> <nvpair id="ClusterIP-instance_attributes-cidr_netmask" name="cidr_netmask" value="32"/> <nvpair id="ClusterIP-instance_attributes-clusterip_hash" name="clusterip_hash" value="sourceip"/> </instance_attributes> <operations> <op id="ClusterIP-start-timeout-20s" interval="0s" name="start" timeout="20s"/> <op id="ClusterIP-stop-timeout-20s" interval="0s" name="stop" timeout="20s"/> <op id="ClusterIP-monitor-interval-30s" interval="30s" name="monitor"/> </operations> <meta_attributes id="ClusterIP-meta_attributes"/> </primitive> <meta_attributes id="ClusterIP-clone-meta"> <nvpair id="ClusterIP-clone-max" name="clone-max" value="2"/> <nvpair id="ClusterIP-clone-node-max" name="clone-node-max" value="2"/> <nvpair id="ClusterIP-globally-unique" name="globally-unique" value="true"/> </meta_attributes> </clone> <clone id="WebFS-clone"> <primitive class="ocf" id="WebFS" provider="heartbeat" type="Filesystem"> <instance_attributes id="WebFS-instance_attributes"> <nvpair id="WebFS-instance_attributes-device" name="device" value="/dev/drbd1"/> <nvpair id="WebFS-instance_attributes-directory" name="directory" value="/var/www/html"/> <nvpair id="WebFS-instance_attributes-fstype" name="fstype" value="gfs2"/> </instance_attributes> <operations> <op id="WebFS-start-timeout-60" interval="0s" name="start" timeout="60"/> <op id="WebFS-stop-timeout-60" interval="0s" name="stop" timeout="60"/> <op id="WebFS-monitor-interval-20" interval="20" name="monitor" timeout="40"/> </operations> <meta_attributes id="WebFS-meta_attributes"/> </primitive> <meta_attributes id="WebFS-clone-meta"/> </clone> <clone id="WebSite-clone"> <primitive class="ocf" id="WebSite" provider="heartbeat" type="apache"> <instance_attributes id="WebSite-instance_attributes"> <nvpair id="WebSite-instance_attributes-configfile" name="configfile" value="/etc/httpd/conf/httpd.conf"/> <nvpair id="WebSite-instance_attributes-statusurl" name="statusurl" value="http://localhost/server-status"/> </instance_attributes> <operations> <op id="WebSite-start-timeout-40s" interval="0s" name="start" timeout="40s"/> <op id="WebSite-stop-timeout-60s" interval="0s" name="stop" timeout="60s"/> <op id="WebSite-monitor-interval-1min" interval="1min" name="monitor"/> </operations> </primitive> <meta_attributes id="WebSite-clone-meta"/> </clone> </resources> <constraints> <rsc_colocation id="colocation-WebSite-ClusterIP-INFINITY" rsc="WebSite-clone" score="INFINITY" with-rsc="ClusterIP-clone"/> <rsc_order first="ClusterIP-clone" first-action="start" id="order-ClusterIP-WebSite-mandatory" then="WebSite-clone" then-action="start"/> <rsc_colocation id="colocation-WebFS-WebDataClone-INFINITY" rsc="WebFS-clone" score="INFINITY" with-rsc="WebDataClone" with-rsc-role="Master"/> <rsc_order first="WebDataClone" first-action="promote" id="order-WebDataClone-WebFS-mandatory" then="WebFS-clone" then-action="start"/> <rsc_colocation id="colocation-WebSite-WebFS-INFINITY" rsc="WebSite-clone" score="INFINITY" with-rsc="WebFS-clone"/> <rsc_order first="WebFS-clone" first-action="start" id="order-WebFS-WebSite-mandatory" then="WebSite-clone" then-action="start"/> <rsc_colocation id="colocation-WebFS-clone-dlm-clone-INFINITY" rsc="WebFS-clone" score="INFINITY" with-rsc="dlm-clone"/> <rsc_order first="dlm-clone" first-action="start" id="order-dlm-clone-WebFS-clone-mandatory" then="WebFS-clone" then-action="start"/> </constraints> <rsc_defaults> <meta_attributes 
id="rsc_defaults-options"> <nvpair id="rsc_defaults-options-resource-stickiness" name="resource-stickiness" value="100"/> </meta_attributes> </rsc_defaults> <op_defaults> <meta_attributes id="op_defaults-options"> <nvpair id="op_defaults-options-timeout" name="timeout" value="240s"/> </meta_attributes> </op_defaults> </configuration> <status> <node_state id="1" uname="pcmk-1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member"> <lrm id="1"> <lrm_resources> <lrm_resource id="WebData" type="drbd" class="ocf" provider="linbit"> <lrm_rsc_op id="WebData_last_0" operation_key="WebData_promote_0" operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="13:4:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;13:4:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="44" rc-code="0" op-status="0" interval="0" last-run="1419264508" last-rc-change="1419264508" exec-time="26" queue-time="0" op-digest="bc5c2e08730036ec602d79a958821da4" on_node="pcmk-1"/> </lrm_resource> <lrm_resource id="dlm" type="controld" class="ocf" provider="pacemaker"> <lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="37:2:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;37:2:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="37" rc-code="0" op-status="0" interval="0" last-run="1419264506" last-rc-change="1419264506" exec-time="1041" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" on_node="pcmk-1"/> <lrm_rsc_op id="dlm_monitor_60000" operation_key="dlm_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="39:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;39:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="38" rc-code="0" op-status="0" interval="60000" last-rc-change="1419264507" exec-time="11" queue-time="0" op-digest="968cc450c09e98fdac3043cb6a194d3d" on_node="pcmk-1"/> </lrm_resource> <lrm_resource id="ClusterIP:0" type="IPaddr2" class="ocf" provider="heartbeat"> <lrm_rsc_op id="ClusterIP:0_last_0" operation_key="ClusterIP:0_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="7:0:7:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:7;7:0:7:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="19" rc-code="7" op-status="0" interval="0" last-run="1419264506" last-rc-change="1419264506" exec-time="28" queue-time="0" op-digest="ac61ecc765070218997f6d876fa1d76c" on_node="pcmk-1"/> </lrm_resource> <lrm_resource id="ClusterIP:1" type="IPaddr2" class="ocf" provider="heartbeat"> <lrm_rsc_op id="ClusterIP:1_last_0" operation_key="ClusterIP:1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="49:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;49:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="40" rc-code="0" op-status="0" interval="0" last-run="1419264507" last-rc-change="1419264507" exec-time="190" queue-time="0" op-digest="ac61ecc765070218997f6d876fa1d76c" on_node="pcmk-1"/> <lrm_rsc_op id="ClusterIP:1_monitor_30000" operation_key="ClusterIP:1_monitor_30000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="50:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;50:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="41" rc-code="0" op-status="0" interval="30000" 
last-rc-change="1419264507" exec-time="27" queue-time="0" op-digest="8ce33853c31576b708595f1d8a4a215c" on_node="pcmk-1"/> </lrm_resource> <lrm_resource id="WebFS" type="Filesystem" class="ocf" provider="heartbeat"> <lrm_rsc_op id="WebFS_last_0" operation_key="WebFS_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="62:5:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;62:5:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="46" rc-code="0" op-status="0" interval="0" last-run="1419264508" last-rc-change="1419264508" exec-time="585" queue-time="0" op-digest="9d797b0e3b7f9729195992c0dafb5a9e" on_node="pcmk-1"/> <lrm_rsc_op id="WebFS_monitor_20000" operation_key="WebFS_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="62:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;62:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="47" rc-code="0" op-status="0" interval="20000" last-rc-change="1419264508" exec-time="21" queue-time="1" op-digest="099af723b175851f09e5391e0c13854e" on_node="pcmk-1"/> </lrm_resource> <lrm_resource id="WebSite" type="apache" class="ocf" provider="heartbeat"> <lrm_rsc_op id="WebSite_last_0" operation_key="WebSite_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="72:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;72:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="48" rc-code="0" op-status="0" interval="0" last-run="1419264508" last-rc-change="1419264508" exec-time="65" queue-time="0" op-digest="49ba395a3f2c142631c2ef2c431a29d9" on_node="pcmk-1"/> <lrm_rsc_op id="WebSite_monitor_60000" operation_key="WebSite_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="73:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;73:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="49" rc-code="0" op-status="0" interval="60000" last-rc-change="1419264508" exec-time="26" queue-time="0" op-digest="eddc33bef3f1592ad847638ee485316f" on_node="pcmk-1"/> </lrm_resource> </lrm_resources> </lrm> <transient_attributes id="1"> <instance_attributes id="status-1"> <nvpair id="status-1-shutdown" name="shutdown" value="0"/> <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/> <nvpair id="status-1-master-WebData" name="master-WebData" value="10000"/> </instance_attributes> </transient_attributes> </node_state> <node_state id="2" uname="pcmk-2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member"> <transient_attributes id="2"> <instance_attributes id="status-2"> <nvpair id="status-2-shutdown" name="shutdown" value="0"/> <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/> <nvpair id="status-2-master-WebData" name="master-WebData" value="10000"/> </instance_attributes> </transient_attributes> <lrm id="2"> <lrm_resources> <lrm_resource id="WebData" type="drbd" class="ocf" provider="linbit"> <lrm_rsc_op id="WebData_last_0" operation_key="WebData_promote_0" operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="16:4:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;16:4:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="41" rc-code="0" op-status="0" interval="0" last-run="1419264508" last-rc-change="1419264508" exec-time="26" queue-time="0" op-digest="bc5c2e08730036ec602d79a958821da4" 
on_node="pcmk-2"/> </lrm_resource> <lrm_resource id="dlm" type="controld" class="ocf" provider="pacemaker"> <lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="35:2:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;35:2:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="34" rc-code="0" op-status="0" interval="0" last-run="1419264506" last-rc-change="1419264506" exec-time="1053" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" on_node="pcmk-2"/> <lrm_rsc_op id="dlm_monitor_60000" operation_key="dlm_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="42:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;42:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="35" rc-code="0" op-status="0" interval="60000" last-rc-change="1419264507" exec-time="19" queue-time="0" op-digest="968cc450c09e98fdac3043cb6a194d3d" on_node="pcmk-2"/> </lrm_resource> <lrm_resource id="ClusterIP:0" type="IPaddr2" class="ocf" provider="heartbeat"> <lrm_rsc_op id="ClusterIP:0_last_0" operation_key="ClusterIP:0_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="47:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;47:3:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="36" rc-code="0" op-status="0" interval="0" last-run="1419264507" last-rc-change="1419264507" exec-time="237" queue-time="0" op-digest="ac61ecc765070218997f6d876fa1d76c" on_node="pcmk-2"/> <lrm_rsc_op id="ClusterIP:0_monitor_30000" operation_key="ClusterIP:0_monitor_30000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="51:4:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;51:4:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="39" rc-code="0" op-status="0" interval="30000" last-rc-change="1419264507" exec-time="34" queue-time="0" op-digest="8ce33853c31576b708595f1d8a4a215c" on_node="pcmk-2"/> </lrm_resource> <lrm_resource id="ClusterIP:1" type="IPaddr2" class="ocf" provider="heartbeat"> <lrm_rsc_op id="ClusterIP:1_last_0" operation_key="ClusterIP:1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="16:0:7:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:7;16:0:7:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="23" rc-code="7" op-status="0" interval="0" last-run="1419264506" last-rc-change="1419264506" exec-time="28" queue-time="0" op-digest="ac61ecc765070218997f6d876fa1d76c" on_node="pcmk-2"/> </lrm_resource> <lrm_resource id="WebFS" type="Filesystem" class="ocf" provider="heartbeat"> <lrm_rsc_op id="WebFS_last_0" operation_key="WebFS_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="60:5:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;60:5:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="43" rc-code="0" op-status="0" interval="0" last-run="1419264508" last-rc-change="1419264508" exec-time="662" queue-time="0" op-digest="9d797b0e3b7f9729195992c0dafb5a9e" on_node="pcmk-2"/> <lrm_rsc_op id="WebFS_monitor_20000" operation_key="WebFS_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="65:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;65:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="44" rc-code="0" op-status="0" interval="20000" 
last-rc-change="1419264508" exec-time="29" queue-time="0" op-digest="099af723b175851f09e5391e0c13854e" on_node="pcmk-2"/> </lrm_resource> <lrm_resource id="WebSite" type="apache" class="ocf" provider="heartbeat"> <lrm_rsc_op id="WebSite_last_0" operation_key="WebSite_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="70:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;70:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="45" rc-code="0" op-status="0" interval="0" last-run="1419264508" last-rc-change="1419264508" exec-time="64" queue-time="0" op-digest="49ba395a3f2c142631c2ef2c431a29d9" on_node="pcmk-2"/> <lrm_rsc_op id="WebSite_monitor_60000" operation_key="WebSite_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="71:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" transition-magic="0:0;71:6:0:225c8bc5-8fb0-49b6-9f75-337085b080de" call-id="46" rc-code="0" op-status="0" interval="60000" last-rc-change="1419264508" exec-time="28" queue-time="0" op-digest="eddc33bef3f1592ad847638ee485316f" on_node="pcmk-2"/> </lrm_resource> </lrm_resources> </lrm> </node_state> </status> </cib>
A.2. Node List
[root@pcmk-1 ~]# pcs status nodes
Pacemaker Nodes:
 Online: pcmk-1 pcmk-2
 Standby:
 Offline:
A.3. Cluster Options
[root@pcmk-1 ~]# pcs property
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: mycluster
 dc-version: 1.1.12-a14efad
 have-watchdog: false
 last-lrm-refresh: 1439569053
 stonith-enabled: true
The output shows state information automatically obtained about the cluster, including:
-
cluster-infrastructure - the cluster communications layer in use (heartbeat or corosync)
-
cluster-name - the cluster name chosen by the administrator when the cluster was created
-
dc-version - the version (including upstream source-code hash) of Pacemaker used on the Designated Controller
The output also shows options set by the administrator that control the way the cluster operates (a sketch of changing one follows the list), including:
-
stonith-enabled=true - whether the cluster is allowed to use STONITH resources
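Administrator-set options like these are changed with pcs property set. A minimal sketch, re-asserting a value already shown above purely for illustration:

[root@pcmk-1 ~]# pcs property set stonith-enabled=true

The same syntax works for any of the options listed above; running pcs property with no arguments re-displays the current values.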
A.4. Resources
A.4.1. Default Options
[root@pcmk-1 ~]# pcs resource defaults
resource-stickiness: 100
This shows resource option defaults that apply to every resource that does not explicitly set the option itself (a sketch of setting such a default follows the list). Above:
-
resource-stickiness - specifies how strongly the cluster should prefer to keep a healthy resource where it is, rather than moving it to another machine
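This default was set earlier in the guide with a command of the following form (repeated here as a reference sketch):

[root@pcmk-1 ~]# pcs resource defaults resource-stickiness=100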
A.4.2. Fencing
[root@pcmk-1 ~]# pcs stonith show
 ipmi-fencing   (stonith:fence_ipmilan) Started
[root@pcmk-1 ~]# pcs stonith show ipmi-fencing
 Resource: ipmi-fencing (class=stonith type=fence_ipmilan)
  Attributes: ipaddr="10.0.0.1" login="testuser" passwd="acd123" pcmk_host_list="pcmk-1 pcmk-2"
  Operations: monitor interval=60s (fence-monitor-interval-60s)
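For reference, a fencing resource with these attributes can be created in one step with pcs stonith create; this sketch mirrors the values shown above (the IPMI address and credentials are examples and must match your own hardware):

[root@pcmk-1 ~]# pcs stonith create ipmi-fencing fence_ipmilan \
      pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser \
      passwd=acd123 op monitor interval=60s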
A.4.3. Service Address
Users of the services provided by the cluster require an unchanging address with which to access them. Additionally, we cloned the address so it will be active on both nodes. An iptables rule (created as part of the resource agent) ensures that each request is processed by only one of the two clone instances. The additional meta options tell the cluster that we want two instances of the clone (one "request bucket" for each node) and that, if one node fails, the remaining node should handle both.
[root@pcmk-1 ~]# pcs resource show ClusterIP-clone
 Clone: ClusterIP-clone
  Meta Attrs: clone-max=2 clone-node-max=2 globally-unique=true
  Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=192.168.122.120 cidr_netmask=32 clusterip_hash=sourceip
   Operations: start interval=0s timeout=20s (ClusterIP-start-timeout-20s)
               stop interval=0s timeout=20s (ClusterIP-stop-timeout-20s)
               monitor interval=30s (ClusterIP-monitor-interval-30s)
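As a reference sketch, a resource and clone with these attributes can be built up with commands along the following lines (the address, hash and clone meta options simply mirror the output above):

[root@pcmk-1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
      ip=192.168.122.120 cidr_netmask=32 op monitor interval=30s
[root@pcmk-1 ~]# pcs resource update ClusterIP clusterip_hash=sourceip
[root@pcmk-1 ~]# pcs resource clone ClusterIP \
      clone-max=2 clone-node-max=2 globally-unique=true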
A.4.4. DRBD - Shared Storage
Here, we define the DRBD service and specify which DRBD resource (from /etc/drbd.d/*.res) it should manage. We make it a master/slave resource and, in order to have an active/active setup, allow both instances to be promoted to master at the same time. We also set the notify option so that the cluster will tell the DRBD agent when its peer changes state.
[root@pcmk-1 ~]# pcs resource show WebDataClone
 Master: WebDataClone
  Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  Resource: WebData (class=ocf provider=linbit type=drbd)
   Attributes: drbd_resource=wwwdata
   Operations: start interval=0s timeout=240 (WebData-start-timeout-240)
               promote interval=0s timeout=90 (WebData-promote-timeout-90)
               demote interval=0s timeout=90 (WebData-demote-timeout-90)
               stop interval=0s timeout=100 (WebData-stop-timeout-100)
               monitor interval=60s (WebData-monitor-interval-60s)
[root@pcmk-1 ~]# pcs constraint ref WebDataClone
Resource: WebDataClone
  colocation-WebFS-WebDataClone-INFINITY
  order-WebDataClone-WebFS-mandatory
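A sketch of how such a master/slave resource is defined (the meta attributes mirror the output above; master-max=2 is what permits both nodes to be promoted in the active/active setup):

[root@pcmk-1 ~]# pcs resource create WebData ocf:linbit:drbd \
      drbd_resource=wwwdata op monitor interval=60s
[root@pcmk-1 ~]# pcs resource master WebDataClone WebData \
      master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true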
A.4.5. Cluster Filesystem
The cluster filesystem ensures that files are read and written correctly. We need to specify the block device (provided by DRBD), where we want it mounted, and that we are using GFS2. Again, it is a clone because it is intended to be active on both nodes. The additional constraints ensure that it can be started only on nodes with active DLM and DRBD instances.
[root@pcmk-1 ~]# pcs resource show WebFS-clone
 Clone: WebFS-clone
  Resource: WebFS (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2
   Operations: start interval=0s timeout=60 (WebFS-start-timeout-60)
               stop interval=0s timeout=60 (WebFS-stop-timeout-60)
               monitor interval=20 timeout=40 (WebFS-monitor-interval-20)
[root@pcmk-1 ~]# pcs constraint ref WebFS-clone
Resource: WebFS-clone
  colocation-WebFS-WebDataClone-INFINITY
  colocation-WebSite-WebFS-INFINITY
  colocation-WebFS-clone-dlm-clone-INFINITY
  order-WebDataClone-WebFS-mandatory
  order-WebFS-WebSite-mandatory
  order-dlm-clone-WebFS-clone-mandatory
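The filesystem resource and the constraints referenced above correspond to commands of roughly this form (a sketch; the constraint names shown earlier are generated automatically by pcs and may differ):

[root@pcmk-1 ~]# pcs resource create WebFS Filesystem \
      device="/dev/drbd1" directory="/var/www/html" fstype="gfs2"
[root@pcmk-1 ~]# pcs resource clone WebFS
[root@pcmk-1 ~]# pcs constraint colocation add WebFS-clone with WebDataClone \
      INFINITY with-rsc-role=Master
[root@pcmk-1 ~]# pcs constraint order promote WebDataClone then start WebFS-clone
[root@pcmk-1 ~]# pcs constraint colocation add WebFS-clone with dlm-clone INFINITY
[root@pcmk-1 ~]# pcs constraint order dlm-clone then WebFS-clone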
A.4.6. Apache
Lastly, we have the actual service, Apache. We need only tell the cluster where to find its main configuration file and restrict it to running on nodes that have the required filesystem mounted and the IP address active.
[root@pcmk-1 ~]# pcs resource show WebSite-clone
 Clone: WebSite-clone
  Resource: WebSite (class=ocf provider=heartbeat type=apache)
   Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status
   Operations: start interval=0s timeout=40s (WebSite-start-timeout-40s)
               stop interval=0s timeout=60s (WebSite-stop-timeout-60s)
               monitor interval=1min (WebSite-monitor-interval-1min)
[root@pcmk-1 ~]# pcs constraint ref WebSite-clone
Resource: WebSite-clone
  colocation-WebSite-ClusterIP-INFINITY
  colocation-WebSite-WebFS-INFINITY
  order-ClusterIP-WebSite-mandatory
  order-WebFS-WebSite-mandatory
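The Apache resource and the constraints recapped above were created with commands of this form (a sketch for reference):

[root@pcmk-1 ~]# pcs resource create WebSite ocf:heartbeat:apache \
      configfile=/etc/httpd/conf/httpd.conf \
      statusurl="http://localhost/server-status" op monitor interval=1min
[root@pcmk-1 ~]# pcs constraint colocation add WebSite with ClusterIP INFINITY
[root@pcmk-1 ~]# pcs constraint order ClusterIP then WebSite
[root@pcmk-1 ~]# pcs constraint colocation add WebSite with WebFS INFINITY
[root@pcmk-1 ~]# pcs constraint order WebFS then WebSite
[root@pcmk-1 ~]# pcs resource clone WebSite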
Sample Corosync Configuration
Sample corosync.conf for a two-node cluster created by pcs.
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: pcmk-1
        nodeid: 1
    }

    node {
        ring0_addr: pcmk-2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_syslog: yes
}
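This file is normally generated by pcs rather than edited by hand; a configuration like the one above is the sort produced by the cluster setup commands used earlier in this guide, roughly:

[root@pcmk-1 ~]# pcs cluster auth pcmk-1 pcmk-2
[root@pcmk-1 ~]# pcs cluster setup --name mycluster pcmk-1 pcmk-2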
Further Reading
-
Project Website http://www.clusterlabs.org/
-
SuSE has a comprehensive guide to cluster commands (though using the crmsh command-line shell rather than pcs) at: https://www.suse.com/documentation/sle_ha/book_sleha/data/book_sleha.html
-
Corosync http://www.corosync.org/
Revision History
Revision 1-0 | Mon May 17 2010 | Andrew Beekhof
Revision 2-0 | Wed Sep 22 2010 | Raoul Scarazzini
Revision 3-0 | Wed Feb 9 2011 | Andrew Beekhof
Revision 4-0 | Wed Oct 5 2011 | Andrew Beekhof
Revision 5-0 | Fri Feb 10 2012 | Andrew Beekhof
Revision 6-0 | Tues July 3 2012 | Andrew Beekhof
Revision 7-0 | Fri Sept 14 2012 | David Vossel
Revision 8-0 | Mon Jan 05 2015 | Ken Gaillot
Revision 8-1 | Thu Jan 08 2015 | Ken Gaillot
Revision 9-0 | Fri Aug 14 2015 | Ken Gaillot
Index
Symbols
/server-status, Enable the Apache status URL
A
Apache HTTP Server, Add Apache HTTP Server as a Cluster Service
/server-status, Enable the Apache status URL
Apache resource configuration, Configure the Cluster
C
Creating and Activating a new SSH Key, Configure SSH
D
Domain name (Query), Use Short Node Names
Domain name (Remove from host name), Use Short Node Names
F
feedback
contact information for this manual, We Need Feedback!
N
Nodes
Domain name (Query), Use Short Node Names
Domain name (Remove from host name), Use Short Node Names
short name, Use Short Node Names
S
short name, Use Short Node Names
SSH, Configure SSH