Kafka Cluster Deployment (with Auto-Start on Boot)

1. Zookeeper Cluster Deployment

1.1 Download and Extract the Package

Download the required version from the official site, upload the package to any node in the cluster, and extract it:

[root@kafka-01 ~]# tar -zxf zookeeper-3.5.5.tar.gz -C /opt/software/
[root@kafka-01 ~]# cd /opt/software/zookeeper-3.5.5

1.2 Edit the Configuration File

[root@kafka-01 zookeeper-3.5.5]# cp conf/zoo_sample.cfg conf/zoo.cfg
[root@kafka-01 zookeeper-3.5.5]# vim conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/var/log/zookeeper/zkData
dataLogDir=/var/log/zookeeper/zkLog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Change these to your own hostnames or IP addresses; do not leave this comment in the real config file.
server.1=kafka-01:2888:3888
server.2=kafka-02:2888:3888
server.3=kafka-03:2888:3888

dataDir is the directory where ZooKeeper stores its data; by default ZooKeeper also writes its transaction log files there (dataLogDir, set above, moves the logs to a separate directory).

clientPort is the port on which ZooKeeper listens for client connections.

In server.A=B:C:D, A is a number identifying the server, B is that server's hostname or IP address, C is the port used to exchange information with the cluster leader (followers connect to the leader on this port), and D is the port used for leader election when the leader goes down.

1.3 Create the Data and Log Directories

Create these directories on every ZooKeeper node; the paths must match the ones in the configuration file.

[root@kafka-01 conf]# mkdir -p /var/log/zookeeper/{zkData,zkLog}

1.4 Create the Server ID File (myid)

On server.1, run:

echo "1" >/opt/zookeeper/zkData/myid

On server.2, run:

echo "2" >/opt/zookeeper/zkData/myid

On server.3, run:

echo "3" >/opt/zookeeper/zkData/myid

And so on for any additional nodes.
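
If passwordless SSH between the nodes is already available (it is required later for the auto-start scripts anyway), the myid files can also be written in one pass from kafka-01. This is only a sketch, using the hostnames and the dataDir path configured above:

id=1
for host in kafka-01 kafka-02 kafka-03; do
    # write the matching server ID into dataDir on each node
    ssh ${host} "mkdir -p /var/log/zookeeper/zkData && echo ${id} > /var/log/zookeeper/zkData/myid"
    id=$((id + 1))
done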

2. Kafka Cluster Deployment

2.1 Download and Extract the Package

Download the required version from the official site, upload the package to any node in the cluster, and extract it:

[root@kafka-01 ~]# tar -zxf kafka_2.11-1.1.0.tgz -C /opt/software/
[root@kafka-01 ~]# cd /opt/software/kafka_2.11-1.1.0
[root@kafka-01 kafka_2.11-1.1.0]# ll config
total 64
-rw-r--r-- 1 root root  906 Mar 24  2018 connect-console-sink.properties
-rw-r--r-- 1 root root  909 Mar 24  2018 connect-console-source.properties
-rw-r--r-- 1 root root 5807 Mar 24  2018 connect-distributed.properties
-rw-r--r-- 1 root root  883 Mar 24  2018 connect-file-sink.properties
-rw-r--r-- 1 root root  881 Mar 24  2018 connect-file-source.properties
-rw-r--r-- 1 root root 1111 Mar 24  2018 connect-log4j.properties
-rw-r--r-- 1 root root 2730 Mar 24  2018 connect-standalone.properties
-rw-r--r-- 1 root root 1221 Mar 24  2018 consumer.properties
-rw-r--r-- 1 root root 4727 Mar 24  2018 log4j.properties
-rw-r--r-- 1 root root 1919 Mar 24  2018 producer.properties
-rw-r--r-- 1 root root 6890 May 13 10:15 server.properties
-rw-r--r-- 1 root root 1032 Mar 24  2018 tools-log4j.properties
-rw-r--r-- 1 root root 1023 Mar 24  2018 zookeeper.properties

2.2 Edit the Configuration File

Only part of the configuration is changed for this demonstration; add further settings as needed. See the Kafka Configuration reference for the full list.

[root@kafka-01 kafka_2.11-1.1.0]# vim config/server.properties

# The id of the broker. This must be set to a unique integer for each broker.
# broker.id must be set to a different value on each node
broker.id=0
# A comma separated list of directories under which to store log files
# Kafka data log directory; use a comma-separated list for multiple directories
log.dirs=/var/log/kafka
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# ZooKeeper ensemble, as a comma-separated host:port list
zookeeper.connect=kafka-01:2181,kafka-02:2181,kafka-03:2181

2.3 Distribute to the Other Nodes

There are many ways to sync files across nodes; scp is used here for demonstration. After copying, broker.id must still be set to a unique value on each node (see the sketch after the commands below).

[root@kafka-01 kafka_2.11-1.1.0]# scp -r /opt/software/kafka_2.11-1.1.0 root@kafka-02:/opt/software
[root@kafka-01 kafka_2.11-1.1.0]# scp -r /opt/software/kafka_2.11-1.1.0 root@kafka-03:/opt/software
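
A minimal sketch for setting a unique broker.id on the other two nodes after the copy, assuming passwordless SSH and the installation path used above:

# broker.id=0 stays on kafka-01; assign 1 and 2 to the other two nodes
ssh kafka-02 "sed -i 's/^broker.id=.*/broker.id=1/' /opt/software/kafka_2.11-1.1.0/config/server.properties"
ssh kafka-03 "sed -i 's/^broker.id=.*/broker.id=2/' /opt/software/kafka_2.11-1.1.0/config/server.properties"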

3. Configure ZooKeeper Auto-Start on Boot

Perform all of the following steps on every ZooKeeper node.

3.1 Modify the ZooKeeper Startup Script

Enable ZooKeeper's four-letter-word commands (Four Letter Words); either of the two methods below works (a quick verification sketch follows the script excerpt below).
Method 1: add ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}" to the startup script. Mind where you insert it so it is applied in the right order; the position shown below is only a reference.
Method 2: add 4lw.commands.whitelist=* to zoo.cfg.

[root@kafka-01 ~]# vim /opt/software/zookeeper-3.5.5/bin/zkServer.sh 

    fi
    if [ "x$JMXLOG4J" = "x" ]
    then
      JMXLOG4J=true
    fi
    echo "ZooKeeper remote JMX Port set to $JMXPORT" >&2
    echo "ZooKeeper remote JMX authenticate set to $JMXAUTH" >&2
    echo "ZooKeeper remote JMX ssl set to $JMXSSL" >&2
    echo "ZooKeeper remote JMX log4j set to $JMXLOG4J" >&2
    ZOOMAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain"
  fi
else
    echo "JMX disabled by user request" >&2
    ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi
# newly added line
ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}"
if [ "x$SERVER_JVMFLAGS" != "x" ]
then
    JVMFLAGS="$SERVER_JVMFLAGS $JVMFLAGS"
fi

if [ "x$2" != "x" ]
then
    ZOOCFG="$ZOOCFGDIR/$2"
fi

# if we give a more complicated path to the config, don't screw around in $ZOOCFGDIR
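
To confirm the whitelist took effect after restarting ZooKeeper, send the stat four-letter command over a raw TCP connection, the same /dev/tcp trick the check script in section 4.1 uses. This is only a sketch and assumes ZooKeeper is already listening on port 2181:

exec 8<>/dev/tcp/localhost/2181    # open a TCP connection to ZooKeeper
echo stat >&8                      # send the four-letter-word command
cat <&8 | grep "Mode:"             # should print Mode: leader / follower
exec 8<&-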

3.2 Add the ZooKeeper Service

Adjust the paths below to match your own installation location and version; do not copy and paste blindly.

[root@kafka-01 conf]# vim /etc/init.d/zookeeper 

#!/bin/bash

export JAVA_HOME=/opt/software/jdk1.8.0_161/
export PATH=$JAVA_HOME/bin:$PATH

#chkconfig:2345 20 90
#description:zookeeper
#processname:zookeeper

case $1 in
        start)
        /opt/software/zookeeper-3.5.5/bin/zkServer.sh start
        ;;
        stop)
        /opt/software/zookeeper-3.5.5/bin/zkServer.sh stop
        ;;
        status)
        /opt/software/zookeeper-3.5.5/bin/zkServer.sh status
        ;;
        restart)
        /opt/software/zookeeper-3.5.5/bin/zkServer.sh restart
        ;;
        *)
        echo "require start|stop|status|restart"
        ;;
esac

3.3 Make the ZooKeeper Script Executable

[root@kafka-01 conf]# chmod 755 /etc/init.d/zookeeper

3.4 Add to the Service List

[root@kafka-01 conf]# chkconfig --add zookeeper

3.5 Enable Auto-Start on Boot

[root@kafka-01 conf]# chkconfig zookeeper on
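
A quick sanity check that the service was registered and responds, assuming the init script above was installed as shown (the runlevel columns printed by chkconfig --list depend on your system):

[root@kafka-01 conf]# chkconfig --list zookeeper
[root@kafka-01 conf]# service zookeeper status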

4. Configure Kafka Auto-Start on Boot

This auto-start approach comes from an earlier setup and requires passwordless SSH to be configured between the nodes in advance. It also has a single point of failure: if the management node responsible for auto-start goes down, the Kafka cluster cannot be started automatically. The scripts also hard-code absolute paths throughout, so there is plenty of room for improvement.

4.1 ZooKeeper Status Check Script

Create a script that checks the health of the ZooKeeper cluster. Kafka can only start once ZooKeeper is up, so the ZooKeeper cluster status is checked before the Kafka cluster is started (a usage sketch follows the script).

[root@kafka-01 ~]# vim AutoStartKafka/zkClusterCheck.sh

#!/bin/bash
#
# Check Zookeeper cluster status
# Author:fyb
# Date:2018/12/14
#

zkHosts="kafka-01 kafka-02 kafka-03"
function getstatus(){
exec 8<>/dev/tcp/$1/2181 || return   # skip this host if the connection fails
echo stat >&8
MODE=`cat <&8 |grep -Po "(?<=Mode: ).*"`
exec 8<&-
echo ${MODE}
}

for i in $zkHosts;do
echo -ne "${i}:"
getstatus ${i}
done >/var/log/zookeeper/zkStatus.txt 2>/var/log/zookeeper/zkErrors.log
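
Run on its own, the script writes one line per node to /var/log/zookeeper/zkStatus.txt. The output below is only illustrative; which node reports leader depends on the election:

[root@kafka-01 ~]# ./AutoStartKafka/zkClusterCheck.sh
[root@kafka-01 ~]# cat /var/log/zookeeper/zkStatus.txt
kafka-01:follower
kafka-02:leader
kafka-03:follower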

4.2 Kafka Cluster Start Script

[root@kafka-01 ~]# vim AutoStartKafka/kafkaStart.sh

#!/bin/sh
#
# Get Zookeeper Status,if the zookeeper is started,start Kafka cluster
# Author:fyb
# Date:2018/12/14
#

export JAVA_HOME=/opt/software/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH
/root/AutoStartKafka/zkClusterCheck.sh
st=`cat /var/log/zookeeper/zkStatus.txt |grep leader`
# loop counter i
i=0
zkHosts="kafka-01 kafka-02 kafka-03"
while [ -z "$st" ]
do
   /root/AutoStartKafka/zkClusterCheck.sh
   st=`cat /var/log/zookeeper/zkStatus.txt |grep leader`
   sleep 1
   let i++
   if [ $i -gt 15 ]; then
      for j in $zkHosts
      do
         ssh $j "source /etc/profile;/opt/software/zookeeper-3.5.5/bin/zkServer.sh start"
      done
      sleep 5
   fi
done
brokerHosts="kafka-01 kafka-02 kafka-03"
for k in $brokerHosts
do
   ssh $k "source /etc/profile;/opt/software/kafka_2.11-1.1.0/bin/kafka-server-start.sh -daemon /opt/software/kafka_2.11-1.1.0/config/server.properties >/var/log/kafka/KafkaStatus.txt"
done

4.3 Kafka Cluster Stop Script

[root@kafka-01 ~]# vim AutoStartKafka/kafkaStop.sh

#!/bin/sh
#
# Stop Kafka cluster
# Author:fyb
# Date:2018/12/14
#

export JAVA_HOME=/opt/software/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH
brokerHosts="kafka-01 kafka-02 kafka-03"
for k in $brokerHosts
do
   ssh $k "source /etc/profile;/opt/software/kafka_2.11-1.1.0/bin/kafka-server-stop.sh"
done

4.4 Make the Scripts Executable

[root@kafka-01 AutoStartKafka]# chmod 755 zkClusterCheck.sh kafkaStart.sh kafkaStop.sh

4.5 Add the Kafka Service

[root@kafka-01 ~]# vim /etc/init.d/kafka

#!/bin/bash

export JAVA_HOME=/opt/software/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH

#chkconfig:2345 20 90
#description:kafka
#processname:kafka

case $1 in
        start)
        /root/AutoStartKafka/kafkaStart.sh
        ;;
        stop)
        /root/AutoStartKafka/kafkaStop.sh
        ;;
        status)
        jps
        ;;
        restart)
        /root/AutoStartKafka/kafkaStop.sh
        /root/AutoStartKafka/kafkaStart.sh
        ;;
        *)
        echo "require start|stop|status|restart"
        ;;
esac

4.6 Make the Kafka Script Executable

[root@kafka-01 ~]# chmod 755 /etc/init.d/kafka

4.7 Add to the Service List

[root@kafka-01 ~]# chkconfig --add kafka

4.8 Enable Auto-Start on Boot

[root@kafka-01 ~]# chkconfig kafka on

4.9 Verify

Reboot a node to verify that the services come up automatically; you can also test manually with service kafka start.
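
A minimal manual check, assuming the init scripts above are in place; after a reboot, jps on each node should list QuorumPeerMain (ZooKeeper) and Kafka:

[root@kafka-01 ~]# service kafka start
[root@kafka-01 ~]# jps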
