Set permissions: chmod 777 kafka_2.12-3.1.0.tgz
Extract: tar -zxvf kafka_2.12-3.1.0.tgz
Rename: mv kafka_2.12-3.1.0 kafka
cd kafka/config/
vim server.properties
Broker ID, globally unique: broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.56.177:9092
Data (log) directory: log.dirs=/home/pengbiao/kafka/datas
Zookeeper cluster connection string:
zookeeper.connect=hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka
Press Esc, then type :wq to save and exit
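Each broker in the cluster needs its own broker.id and its own advertised.listeners pointing at that host. A sketch of how the three nodes might differ (the mapping of hosts to IDs here is an assumption; only broker.id=0 and 192.168.56.177 appear in the original configuration):
# hadoop001
broker.id=0
advertised.listeners=PLAINTEXT://192.168.56.177:9092
# hadoop002
broker.id=1
advertised.listeners=PLAINTEXT://hadoop002:9092
# hadoop003
broker.id=2
advertised.listeners=PLAINTEXT://hadoop003:9092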
vim zookeeper.properties
Data directory: dataDir=/home/pengbiao/kafka/zookeeper
Press Esc, then type :wq to save and exit
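The zookeeper.connect string above points at a three-node Zookeeper ensemble. If the bundled Zookeeper instances are meant to form that ensemble rather than run standalone, zookeeper.properties on each node would also need quorum settings; a minimal sketch, assuming hadoop001, hadoop002 and hadoop003 are the three members (ports and ID assignments are assumptions):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/pengbiao/kafka/zookeeper
clientPort=2181
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888
Each node would also need a myid file inside dataDir containing its own ID, e.g. on hadoop001: echo 1 > /home/pengbiao/kafka/zookeeper/myid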
vim /etc/profile
# Kafka environment variables
export KAFKA_HOME=/home/pengbiao/kafka
export PATH=$PATH:$KAFKA_HOME/bin
The JDK installation is omitted here; its environment variables are as follows:
export JAVA_HOME=/home/pengbiao/jdk-11
export JRE_HOME=/home/pengbiao/jdk-11/jre
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
Reload the environment variables: source /etc/profile
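To confirm the variables took effect, a quick check (the exact version strings printed depend on your installation):
java -version
kafka-topics.sh --version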
Create a bin directory under the user's home directory: mkdir bin
cd bin
vim xsync
#!/bin/bash
#1. Get the number of arguments; exit immediately if none are given
pcount=$#
if [ $pcount -lt 1 ]
then
echo "Not enough arguments!"
exit;
fi
#2. Iterate over every machine in the cluster
for host in hadoop002 hadoop003
do
echo ==================== $host ====================
#3. Iterate over all files and directories passed in
for file in "$@"
do
#4. Check whether the file exists
if [ -e $file ]
then
#5. Get the absolute path of the parent directory
pdir=$(cd -P $(dirname $file); pwd)
echo pdir=$pdir
#6. Get the file name
fname=$(basename $file)
echo fname=$fname
#7. Create the directory on $host via ssh (mkdir -p is a no-op if it already exists)
ssh $host "source /etc/profile;mkdir -p $pdir"
#8. Sync the file to the $pdir directory of user $USER on $host
rsync -av $pdir/$fname $USER@$host:$pdir
else
echo "$file does not exist!"
fi
done
done
Press Esc, then type :wq to save and exit
Make xsync executable: chmod 777 xsync
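As a quick sanity check (this assumes passwordless SSH from this host to hadoop002 and hadoop003 is already set up, which xsync and the later scripts rely on), you can distribute the script itself:
xsync /home/pengbiao/bin/xsync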
Edit the /etc/hosts file: vim /etc/hosts
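The hadoop001/hadoop002/hadoop003 host names used throughout this guide must resolve on every node. A sketch of the entries (the IP addresses are placeholders for your actual ones; only 192.168.56.177 appears in the original configuration):
192.168.56.177 hadoop001
192.168.56.178 hadoop002
192.168.56.179 hadoop003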
Distribute the files to the servers in the cluster (prerequisite: SSH is installed and running on every server):
Return to the user's home directory: cd ~
Distribute jdk-11 to the cluster servers: xsync jdk-11
Distribute Kafka to the cluster servers: xsync kafka
Note: on every server in the cluster you still need to adjust the environment variables in /etc/profile, the name resolution in /etc/hosts, and Kafka's server.properties (see the steps above)
On each server, enter the Kafka installation directory: cd kafka/
Start Zookeeper: bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
Start Kafka: bin/kafka-server-start.sh -daemon config/server.properties
Stop Zookeeper: bin/zookeeper-server-stop.sh
Stop Kafka: bin/kafka-server-stop.sh
(When shutting down, it is generally safer to stop Kafka first and Zookeeper last, so the brokers can exit cleanly.)
Check the running processes: jps
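If everything is up, jps on each node should list a QuorumPeerMain process for Zookeeper and a Kafka process for the broker, for example (PIDs are illustrative):
21345 QuorumPeerMain
21892 Kafka
22010 Jps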
Enter the bin directory under the user's home directory: cd bin/
Create and edit the Zookeeper start/stop script: vim zk.sh
#!/bin/bash
case $1 in
"start")
for i in hadoop002 hadoop003
do
echo "--- 启动 $i zookeeper ---"
ssh $i "/home/pengbiao/kafka/bin/zookeeper-server-start.sh -daemon /home/pengbiao/kafka/config/zookeeper.properties"
done
;;
"stop")
for i in hadoop002 hadoop003
do
echo "--- 停止 $i zookeeper ---"
ssh $i "/home/pengbiao/kafka/bin/zookeeper-server-stop.sh"
done
;;
esac
Create and edit the Kafka start/stop script: vim kf.sh
#!/bin/bash
case $1 in
"start")
for i in hadoop002 hadoop003
do
echo "--- 启动 $i Kafka ---"
ssh $i "/home/pengbiao/kafka/bin/kafka-server-start.sh -daemon /home/pengbiao/kafka/config/server.properties"
done
;;
"stop")
for i in hadoop002 hadoop003
do
echo "--- 停止 $i Kafka ---"
ssh $i "/home/pengbiao/kafka/bin/kafka-server-stop.sh"
done
;;
esac
Make the scripts executable: chmod 777 zk.sh
chmod 777 kf.sh
Start Zookeeper: zk.sh start
Start Kafka: kf.sh start
Stop Zookeeper: zk.sh stop
Stop Kafka: kf.sh stop
In the original Kafka architecture (left diagram), the metadata lives in Zookeeper: a controller is elected dynamically at runtime, and that controller manages the Kafka cluster. In KRaft mode (right diagram; experimental, available since version 2.8), Kafka no longer depends on a Zookeeper cluster; instead, three controller nodes take Zookeeper's place, the metadata is stored in the controllers, and the controllers manage the Kafka cluster directly. This design has several benefits.
vim server.properties
Cluster node ID (globally unique): node.id=1
Controller quorum configuration:
controller.quorum.voters=1@hadoop002:9093,2@hadoop003:9093
listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://hadoop002:9092
Data directory: log.dirs=/home/pengbiao/kafka/data
Press Esc, then type :wq to save and exit
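Two per-node details worth noting for KRaft mode (a sketch based on the stock kraft/server.properties template, not taken verbatim from the original): process.roles must also be set (for a combined broker/controller node, process.roles=broker,controller), and node.id plus advertised.listeners must be unique per host and match the IDs declared in controller.quorum.voters. For the two nodes above that might look like:
# hadoop002
process.roles=broker,controller
node.id=1
advertised.listeners=PLAINTEXT://hadoop002:9092
# hadoop003
process.roles=broker,controller
node.id=2
advertised.listeners=PLAINTEXT://hadoop003:9092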
Generate a cluster UUID: bin/kafka-storage.sh random-uuid
Format the storage directory with the generated UUID: bin/kafka-storage.sh format -t Wc2IGWeLRfu6Fae-D69_5A -c /home/pengbiao/kafka2/config/kraft/server.properties
Start Kafka in KRaft mode: bin/kafka-server-start.sh -daemon config/kraft/server.properties
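Note that the value printed by random-uuid is the cluster ID: every node in the KRaft cluster must be formatted with the same ID before its first start. A minimal sketch of the sequence on one node (run random-uuid once and reuse the result on the others; relative paths assume you are in the Kafka installation directory):
CLUSTER_ID=$(bin/kafka-storage.sh random-uuid)
bin/kafka-storage.sh format -t $CLUSTER_ID -c config/kraft/server.properties
bin/kafka-server-start.sh -daemon config/kraft/server.properties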