When a real project sends messages through MQ without a cluster, a single broker failure means no more messages can be sent and the whole system goes down. We therefore need to cluster the MQ so that when one broker fails, the remaining brokers keep running. This article describes how to build a highly available ActiveMQ cluster using ZooKeeper.
Note: disable the firewall on all nodes.
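On CentOS nodes (which is what the inventory below uses), this might look like the following sketch, depending on whether the node runs firewalld or the older iptables service:

# CentOS 7 (firewalld)
systemctl stop firewalld && systemctl disable firewalld
# CentOS 6 (iptables service)
service iptables stop && chkconfig iptables off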
Prerequisites:
Prepare three server nodes for the ZooKeeper and ActiveMQ deployment. My three nodes are 192.168.1.130, 192.168.1.163 and 192.168.1.165.
Install ZooKeeper and ActiveMQ:
Download ZooKeeper on each node and extract it:
wget http://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
tar -zxvf zookeeper-3.4.6.tar.gz
Download ActiveMQ on each node and extract it:
wget http://apache.fayea.com/activemq/5.11.1/apache-activemq-5.11.1-bin.tar.gz
tar -xf ./apache-activemq-5.11.1-bin.tar.gz
These are the usual download-and-extract steps; I deploy them to the nodes with Ansible, which makes things quick and painless. My inventory looks like this (a sketch of the playbook run follows the inventory):
#centos_docker ansible_ssh_host=192.168.1.222 ansible_ssh_user=root
centos_svr ansible_ssh_host=192.168.1.130 ansible_ssh_user=root zookeeper_id=1
centos_svr1 ansible_ssh_host=192.168.1.163 ansible_ssh_user=root zookeeper_id=2
centos_svr2 ansible_ssh_host=192.168.1.165 ansible_ssh_user=root zookeeper_id=3
#ubuntu_svr ansible_ssh_host=192.168.1.236 ansible_ssh_user=root zookeeper_id=2
#debian_svr ansible_ssh_host=192.168.1.237 ansible_ssh_user=root zookeeper_id=3
[nginx]
centos_svr
#ubuntu_svr
#debian_svr
[nginx:vars]
[etcd]
centos_svr
#ubuntu_svr
#debian_svr
[etcd:vars]
[redis]
centos_svr
#ubuntu_svr
#debian_svr
[redis:vars]
[zookeeper]
centos_svr
#ubuntu_svr
#debian_svr
centos_svr1
centos_svr2
[activemq]
centos_svr
[zookeeper:vars]
zookeeper_hosts='1:192.168.1.130,2:192.168.1.163,3:192.168.1.165'
[docker-build]
centos_svr
#ubuntu_svr
#debian_svr
[docker-build:vars]
docker_build_force=False
#docker_instance_using_exist_data=True
#docker_instance_sshport='2022'
#zookeeper_hosts='1:192.168.1.130,2:192.168.1.163,3:192.168.1.165'
[all-group:children]
nginx
etcd
redis
zookeeper
docker-build
[all-group:vars]
#NexusRepoHost=192.168.1.249
NexusRepoHost=linux.xxxxx.com.cn
#NexusRepoHost=nexus.clouedu.com
NexusRepoPort=8081
docker_build_host_global_vars={ 'NexusRepoHost': '{{ NexusRepoHost }}', 'NexusRepoPort': '{{ NexusRepoPort }}' }
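With that inventory in place, a run could look like the sketch below. The inventory file name (hosts) and the playbook names (zookeeper.yml, activemq.yml) are placeholders for whatever roles you actually use; they are not part of the original setup.

# hypothetical file names -- substitute your own inventory and playbooks
ansible-playbook -i hosts zookeeper.yml --limit zookeeper
ansible-playbook -i hosts activemq.yml --limit activemq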
First, configure the ZooKeeper cluster:
Configure the environment variables:
[root@localhost ~]# vi /etc/profile
Append the following:
export ZOOKEEPER_HOME=/opt/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
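Variables added to /etc/profile only take effect in new login shells; to apply them to the current shell and verify:

source /etc/profile
echo $ZOOKEEPER_HOME    # should print /opt/zookeeper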
Configure the first ZooKeeper node first: edit zoo.cfg in the conf directory of the ZooKeeper installation. If it does not exist, copy zoo_sample.cfg to zoo.cfg and edit that.
[root@localhost zookeeper]# vi conf/zoo.cfg
Then add the following configuration:
# the directory where the snapshot is stored.
dataDir=/opt/zookeeper/data
# Place the dataLogDir to a separate physical disc for better performance
dataLogDir=/opt/zookeeper/log
...............
server.1=0.0.0.0:2888:3888
server.2=192.168.1.163:2888:3888
server.3=192.168.1.165:2888:3888
Then create a file named myid in the data directory, write 1 into it, and save and exit.
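For example, on the first node (the directories match the dataDir and dataLogDir settings above):

mkdir -p /opt/zookeeper/data /opt/zookeeper/log
echo 1 > /opt/zookeeper/data/myid    # write 2 on 192.168.1.163 and 3 on 192.168.1.165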
Configure the second ZooKeeper node; the steps are the same as above.
# the directory where the snapshot is stored.
dataDir=/opt/zookeeper/data
# Place the dataLogDir to a separate physical disc for better performance
dataLogDir=/opt/zookeeper/log
...............
server.1=192.168.1.130:2888:3888
server.2=0.0.0.0:2888:3888
server.3=192.168.1.165:2888:3888
Then create the myid file in the data directory, write 2 into it, and save and exit.
Configuration of the third ZooKeeper node:
# the directory where the snapshot is stored.
dataDir=/opt/zookeeper/data
# Place the dataLogDir to a separate physical disc for better performance
dataLogDir=/opt/zookeeper/log
...............
server.1=192.168.1.130:2888:3888
server.2=192.168.1.163:2888:3888
server.3=0.0.0.0:2888:3888
Then create the myid file in the data directory, write 3 into it, and save and exit.
With ZooKeeper configured, start the nodes one after another.
Start ZooKeeper:
zkServer.sh start
Check the status:
zkServer.sh status
Once the cluster has formed, the status of the three nodes looks like this:
[root@localhost ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@localhost ~]#
[root@localhost ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@localhost ~]#
[root@localhost ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@localhost ~]#
If you see an error, wait a moment: the three nodes need to connect to each other, and errors appear while some of them are still coming up. Once the election algorithm has chosen a leader and the other two nodes have become followers, everything is normal.
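You can also watch the election from any machine using ZooKeeper's four-letter-word commands (available out of the box in 3.4.6), assuming nc is installed:

for h in 192.168.1.130 192.168.1.163 192.168.1.165; do
  echo -n "$h: "
  echo stat | nc $h 2181 | grep Mode    # prints Mode: leader / Mode: follower
done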
Once you see the output above, the ZooKeeper cluster is working. Next, let's cluster ActiveMQ.
Install ActiveMQ on the same three nodes; the installation is identical to ZooKeeper's, so I won't repeat it.
The ActiveMQ configuration on the three nodes is as follows:
[root@localhost activemq]# vi conf/activemq.xml
First change brokerName; it must be identical on all three brokers:
brokerName="activemq-cluster"
Then change persistenceAdapter. The block is the same on all three machines, except that hostname must be the node's own IP; to avoid conflicts, set the bind port to 62621, 62622 and 62623 respectively.
<persistenceAdapter>
    <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:62621"
        zkAddress="192.168.1.130:2181,192.168.1.163:2181,192.168.1.165:2181"
        hostname="192.168.1.130"
        zkPath="/activemq/leveldb-stores"
    />
</persistenceAdapter>
To avoid port conflicts, change the openwire port to 51515, 51516 and 51517 on the three brokers respectively:
<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:51515?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
Modify jetty.xml:
[root@localhost activemq]# vi conf/jetty.xml
Change the web console port of the three brokers to 8161, 8162 and 8163 respectively:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
<!-- the default port number for the web console -->
<property name="host" value="0.0.0.0"/>
<property name="port" value="8161"/>
</bean>
Finally, start the three brokers one by one. If you open the management consoles in a browser, you will find that only one of them is reachable: that broker is the master; the other two are slaves and do not start their transports. When the master goes down, one of the remaining two is elected as the new master, starts up and keeps delivering messages, which gives us the high availability we want. A quick way to check from the shell which node is currently the master is shown below, followed by the test code.
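Probing the three console ports (assuming curl is available) is enough, because only the master opens its web console:

for h in 192.168.1.130:8161 192.168.1.163:8162 192.168.1.165:8163; do
  printf '%s -> ' "$h"
  curl -s -o /dev/null -w '%{http_code}\n' --connect-timeout 2 "http://$h/admin/"
done
# the master answers with an HTTP status code; the slaves refuse the connection (000)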
ActiveMQ connection factory configuration:
<!-- ActiveMQ connection factory: userName, password, failover broker URL -->
<bean id="activeMQConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <constructor-arg value="system" />
    <constructor-arg value="manager" />
    <constructor-arg value="failover:(tcp://192.168.1.130:51515,tcp://192.168.1.163:51516,tcp://192.168.1.165:51517)?randomize=false" />
</bean>
Message sending client class:
package com.wind.client;

import javax.jms.Destination;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jms.core.JmsTemplate;

import com.wind.sender.MessageSender;

public class MessageClient {

    public static void main(String[] args) {
        ApplicationContext applicationContext = new ClassPathXmlApplicationContext("applicationContext.xml");
        JmsTemplate jmsTemplate = (JmsTemplate) applicationContext.getBean("jmsTemplate");
        final Destination destination = (Destination) applicationContext.getBean("destination");
        System.out.println("Connected to " + destination.toString());

        final MessageSender messageSender = new MessageSender(destination, jmsTemplate);
        // Send one message per second so we can kill the master broker mid-run
        // and watch the client fail over to the new master.
        new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    try {
                        Thread.sleep(1000);
                        messageSender.sendMessageByTxt("ActiveMQ " + i);
                        System.out.println("Sent message: ActiveMQ " + i);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }).start();
    }
}
Message sender class:
package com.wind.sender;

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;

public class MessageSender {

    private Destination destination;
    private JmsTemplate jmsTemplate;

    public MessageSender(Destination destination, JmsTemplate jmsTemplate) {
        this.destination = destination;
        this.jmsTemplate = jmsTemplate;
    }

    // Send a text message by building the JMS message ourselves.
    public void sendMessage(final String txt) {
        jmsTemplate.send(destination, new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(txt);
            }
        });
    }

    // Send a text message via JmsTemplate's default message converter.
    public void sendMessageByTxt(String tx) {
        jmsTemplate.setDefaultDestination(destination);
        jmsTemplate.convertAndSend(tx);
    }
}
Message receiver class:
package com.wind.reciever;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class MessageReciever implements MessageListener {

    // Called by the listener container for every message delivered to the destination.
    public void onMessage(Message message) {
        if (message instanceof TextMessage) {
            TextMessage textMessage = (TextMessage) message;
            try {
                String text = textMessage.getText();
                System.out.println("Received message: " + text);
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    }
}
Test results:
Run the program so that messages are being sent and received normally, then use:
# ps axu | grep activemq
# kill -9 xxx
to kill the broker process on the master node (192.168.1.130:51515) and watch the effect.
Then kill the broker on the newly elected master (192.168.1.163:51516) and watch again.
You will see that the client picks up where it left off and keeps sending messages each time, which is exactly the high availability we wanted.