
You Can't Go Wrong with It: Kafka Visual Monitoring with Eagle

The Kafka-Eagle framework monitors the overall health of a Kafka cluster and is widely used in production environments.

1. MySQL Environment Preparation

Kafka-Eagle depends on MySQL, which it uses mainly to store the data shown in its dashboards. If MySQL is already installed in the cluster, you can skip this step.
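EFAK populates its own tables on first start, but the database it connects to must already exist. A minimal sketch, assuming a root account on hadoop102 and the database name ke that the efak.url setting later in this guide points at:

[bigdata@hadoop102 ~]$ mysql -uroot -p -e "CREATE DATABASE IF NOT EXISTS ke DEFAULT CHARACTER SET utf8mb4;"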

2. Kafka Environment Preparation

Stop the Kafka cluster:

[bigdata@hadoop102 kafka]$ kf.sh stop

The kf.sh script in /home/bigdata/bin is as follows:

#! /bin/bash

case $1 in
"start"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo " -------- starting Kafka on $i --------"
        ssh $i "/opt/module/kafka_2.12-3.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.12-3.0.0/config/server.properties"
    done
};;
"stop"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo " -------- stopping Kafka on $i --------"
        ssh $i "/opt/module/kafka_2.12-3.0.0/bin/kafka-server-stop.sh "
    done
};;
esac
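Before editing kafka-server-start.sh, it can be worth confirming that no broker process is left running. A quick check in the same ssh-loop style as kf.sh (assumes jps is on the PATH of the remote shells):

for i in hadoop102 hadoop103 hadoop104
do
    echo "==================== $i ===================="
    ssh $i "jps | grep -i kafka || echo 'no Kafka process running'"
done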

Modify /opt/module/kafka/bin/kafka-server-start.sh:

[bigdata@hadoop102 kafka]$ vim bin/kafka-server-start.sh

Change the following values. The original block:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

becomes:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
    export JMX_PORT="9999"
    #export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

The added export JMX_PORT="9999" line exposes the JMX metrics that Kafka-Eagle reads.

Note: after the change, distribute the script to the other nodes before starting Kafka.

[bigdata@hadoop102 bin]$ xsync kafka-server-start.sh

The xsync script in /home/bigdata/bin is as follows:

#!/bin/bash

# 1. Check the number of arguments
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit;
fi

# 2. Loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
    echo ==================== $host ====================
    # 3. Loop over every file/directory given and send each one
    for file in $@
    do
        # 4. Check that the file exists
        if [ -e $file ]
        then
            # 5. Get the absolute parent directory
            pdir=$(cd -P $(dirname $file); pwd)
            # 6. Get the file name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo "$file does not exist!"
        fi
    done
done
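Once the modified kafka-server-start.sh has been distributed and the brokers restarted with kf.sh start, you can optionally confirm that the JMX port configured above is listening on every node. A minimal check, assuming the ss utility is available:

for i in hadoop102 hadoop103 hadoop104
do
    ssh $i "ss -lnt | grep -w 9999 || echo 'JMX port 9999 not open on $i'"
done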

3. Kafka-Eagle Installation

Official site: https://www.kafka-eagle.org/

Upload the archive kafka-eagle-bin-2.0.8.tar.gz to the /opt/software directory on the cluster.

Extract it locally:

[bigdata@hadoop102 software]$ tar -zxvf kafka-eagle-bin-2.0.8.tar.gz

Enter the extracted directory:

[bigdata@hadoop102 kafka-eagle-bin-2.0.8]$ ll
total 79164
-rw-rw-r--. 1 bigdata bigdata 81062577 Oct 13 00:00 efak-web-2.0.8-bin.tar.gz

Extract efak-web-2.0.8-bin.tar.gz to /opt/module:

[bigdata@hadoop102 kafka-eagle-bin-2.0.8]$ tar -zxvf efak-web-2.0.8-bin.tar.gz -C /opt/module/

Rename the directory:

[bigdata@hadoop102 module]$ mv efak-web-2.0.8/ efak
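Before editing anything, a quick sanity check that the configuration file used in the next step is in place:

[bigdata@hadoop102 module]$ ls /opt/module/efak/conf

system-config.properties should appear in the listing.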

Edit the configuration file /opt/module/efak/conf/system-config.properties:

[bigdata@hadoop102 conf]$ vim system-config.properties

######################################

# multi zookeeper & kafka cluster list

# Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead

######################################

efak.zk.cluster.alias=cluster1

cluster1.zk.list=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka

######################################

# zookeeper enable acl

######################################

cluster1.zk.acl.enable=false

cluster1.zk.acl.schema=digest

cluster1.zk.acl.username=test

cluster1.zk.acl.password=test123

######################################

# broker size online list

######################################

cluster1.efak.broker.size=20

######################################

# zk client thread limit

######################################

kafka.zk.limit.size=32

######################################

# EFAK webui port

######################################

efak.webui.port=8048

######################################

# kafka jmx acl and ssl authenticate

######################################

cluster1.efak.jmx.acl=false

cluster1.efak.jmx.user=keadmin

cluster1.efak.jmx.password=keadmin123

cluster1.efak.jmx.ssl=false

cluster1.efak.jmx.truststore.location=/data/ssl/certificates/kafka.truststore

cluster1.efak.jmx.truststore.password=ke123456

######################################

# kafka offset storage

######################################

# store offsets in Kafka

cluster1.efak.offset.storage=kafka

######################################

# kafka jmx uri

######################################

cluster1.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi

######################################

# kafka metrics, 15 days by default

######################################

efak.metrics.charts=true

efak.metrics.retain=15

######################################

# kafka sql topic records max

######################################

efak.sql.topic.records.max=5000

efak.sql.topic.preview.records.max=10

######################################

# delete kafka topic token

######################################

efak.topic.token=keadmin

######################################

# kafka sasl authenticate

######################################

cluster1.efak.sasl.enable=false

cluster1.efak.sasl.protocol=SASL_PLAINTEXT

cluster1.efak.sasl.mechanism=SCRAM-SHA-256

cluster1.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafka-eagle";

cluster1.efak.sasl.client.id=

cluster1.efak.blacklist.topics=

cluster1.efak.sasl.cgroup.enable=false

cluster1.efak.sasl.cgroup.topics=

cluster2.efak.sasl.enable=false

cluster2.efak.sasl.protocol=SASL_PLAINTEXT

cluster2.efak.sasl.mechanism=PLAIN

cluster2.efak.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka-eagle";

cluster2.efak.sasl.client.id=

cluster2.efak.blacklist.topics=

cluster2.efak.sasl.cgroup.enable=false

cluster2.efak.sasl.cgroup.topics=

######################################

# kafka ssl authenticate

######################################

cluster3.efak.ssl.enable=false

cluster3.efak.ssl.protocol=SSL

cluster3.efak.ssl.truststore.location=

cluster3.efak.ssl.truststore.password=

cluster3.efak.ssl.keystore.location=

cluster3.efak.ssl.keystore.password=

cluster3.efak.ssl.key.password=

cluster3.efak.ssl.endpoint.identification.algorithm=https

cluster3.efak.blacklist.topics=

cluster3.efak.ssl.cgroup.enable=false

cluster3.efak.ssl.cgroup.topics=

######################################

# kafka sqlite jdbc driver address

######################################

# MySQL connection settings

efak.driver=com.mysql.jdbc.Driver

efak.url=jdbc:mysql://hadoop102:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull

efak.username=root

efak.password=root

######################################

# kafka mysql jdbc driver address

######################################

#efak.driver=com.mysql.cj.jdbc.Driver

#efak.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull

#efak.username=root

#efak.password=123456
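With this many commented-out defaults it is easy to miss a typo in the few lines that actually changed. A quick way to view only the active settings (plain grep, nothing EFAK-specific):

[bigdata@hadoop102 conf]$ grep -v '^#' system-config.properties | grep -v '^$'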

Add environment variables:

[bigdata@hadoop102 conf]$ sudo vim /etc/profile.d/my_env.sh

# kafkaEFAK

export KE_HOME=/opt/module/efak

export PATH=$PATH:$KE_HOME/bin

Note: run source /etc/profile so the new variables take effect.

[bigdata@hadoop102 conf]$ source /etc/profile
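A quick check that the new variables are visible in the current shell (assuming the paths above and that ke.sh kept its execute bit from the archive):

[bigdata@hadoop102 conf]$ echo $KE_HOME    # should print /opt/module/efak
[bigdata@hadoop102 conf]$ which ke.sh      # should resolve to /opt/module/efak/bin/ke.sh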

Startup

Note: ZooKeeper and Kafka must be running before starting EFAK.

[bigdata@hadoop102 kafka]$ kf.sh start

Start EFAK:

[bigdata@hadoop102 efak]$ bin/ke.sh start

Version 2.0.8 -- Copyright 2016-2021

*****************************************************************

* EFAK Service has started success.

* Welcome, Now you can visit 'http://192.168.10.102:8048'

* Account:admin ,Password:123456

*****************************************************************

* <Usage> ke.sh [start|status|stop|restart|stats] </Usage>

* <Usage> https://www.kafka-eagle.org/ </Usage>

*****************************************************************

Note: to stop EFAK, run:

[bigdata@hadoop102 efak]$ bin/ke.sh stop
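The same script can also report status and restart the service, as the usage line in the startup banner shows. If startup fails, the logs directory under $KE_HOME is usually the first place to look:

[bigdata@hadoop102 efak]$ bin/ke.sh status
[bigdata@hadoop102 efak]$ ls logs/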

4. Kafka-Eagle Web UI

Log in to the page to view the monitoring data:

http://192.168.10.102:8048/
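If the page does not load, a quick reachability check from the command line can narrow things down before digging into logs (assumes curl is installed):

[bigdata@hadoop102 efak]$ curl -s -o /dev/null -w '%{http_code}\n' http://192.168.10.102:8048

Any HTTP status code (for example 200, or a redirect to the login page) means the web service is up; a connection error points back at the ke.sh startup step.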

(The original post includes screenshots of the EFAK dashboard here.)

5. Summary

Kafka Eagle provides a complete management UI that makes it easy to manage and visualize a Kafka cluster: broker details, performance metric trends, the topic list, consumer information, and more. You can't go wrong with it, so install it and give it a try.
