
[Ops Knowledge Master Series] An Extremely Detailed ELFK Log Analysis Tutorial, Part 10 (Kafka cluster principles + basic usage + ZooKeeper and Kafka heap tuning + Kafka monitoring and stress testing + integrating Filebeat and Logstash with Kafka)


This article continues our ELFK log analysis series. We have already covered the ELFK architecture, ZooKeeper deployment and usage, and Kafka deployment; only Kafka usage remains before the whole pipeline comes together. This installment centers on Kafka: we walk through its usage in detail and finally fold Kafka into the ELFK architecture. See the table of contents below for an outline.

Contents

Kafka Cluster Principles

I. Terminology

II. Why Kafka Can Lose Data

Basic Usage of the Kafka Cluster

I. Starting Kafka

II. Topic Management

III. Producers and Consumers

IV. Consumer Group Management

ZooKeeper Heap Memory Tuning

Kafka Heap Memory Tuning

Kafka Open-Source Monitoring Tool: kafka-eagle

I. Preparation

II. Deploying the Monitor

Kafka Cluster Stress Testing

Integrating Filebeat with Kafka

I. Filebeat as a Producer

II. Filebeat as a Consumer

Integrating Logstash with Kafka

I. Logstash as a Producer

II. Logstash as a Consumer


Kafka Cluster Principles

Kafka rests on a number of underlying concepts; understanding them up front makes everything that follows much easier.

I. Terminology

A Kafka cluster is a distributed messaging system, similar to a message queue (MQ) cluster. It consists of a broker list (the nodes running Kafka) and multiple topics. Each topic is subdivided into partitions, and each partition can be replicated across several brokers for high availability. A Kafka cluster is therefore made up of multiple broker nodes plus the partitions of multiple topics.

A producer writes data into the Kafka cluster. There are two common strategies for choosing the target partition: round-robin, or hashing the record key modulo the partition count.
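You can watch the key-based strategy in action with the console producer: attach a key to each record, and all records sharing a key land in the same partition (hash(key) % partition count). A quick sketch; the topic name demo-keys is just an example:

  # Everything before ":" is the key, the rest is the value
  kafka-console-producer.sh --bootstrap-server 10.0.0.101:9092 --topic demo-keys \
    --property parse.key=true --property key.separator=:
  >user1:login
  >user1:logout    # same key "user1", so same partition as the record above
  # On recent Kafka versions the console consumer can print the partition for each record:
  # kafka-console-consumer.sh ... --from-beginning --property print.partition=true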

A consumer reads (pulls) data from the Kafka cluster.

A topic is the logical unit in which data is stored.

A replica is where data is physically stored; every topic has at least one replica.

A partition is a subdivision of a topic; every topic has at least one partition, and normally a topic has several numbered partitions. Replicas are the physical carriers of partitions.

A consumer group contains at least one consumer. Within the same group, two consumers never consume the same partition of a topic at the same time, which prevents duplicate consumption. Whenever the membership of a group changes, a rebalance is triggered and the partitions are reassigned among the remaining consumers.
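An easy way to see group semantics and a rebalance: start two console consumers with the same group on a multi-partition topic; the partitions are split between them, and stopping one hands its partitions to the survivor. A sketch, with demo-group as an arbitrary group name:

  # Run the same command in two terminals
  kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092 --topic koten-3 --group demo-group
  # Ctrl+C one of them and watch the other take over its partitions (a rebalance)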

ISR is the set of replicas that are in sync with the leader.

OSR is the set of replicas that have fallen out of sync with the leader.

AR is the union of ISR and OSR, i.e. all replicas.

II. Why Kafka Can Lose Data

If a follower's LEO (Log End Offset) matches the leader's within 30 seconds, the two are considered in sync; a replica that fails to catch up with the leader's LEO within 30 seconds is kicked out of the ISR and moved to the OSR. Now suppose the leader crashes before a follower has fully caught up. A new leader is elected from the followers, and the other followers continue replicating from it. When the old leader recovers, it truncates back to the previous HW (high watermark, the smallest LEO among the ISR replicas) and re-syncs from the new leader, so any records beyond the HW that the followers had never replicated are lost.
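The 30-second window above corresponds to the broker setting replica.lag.time.max.ms (default 30000 ms). If durability matters more than throughput, a commonly used combination is sketched below; these values are illustrative, not this cluster's actual config:

  # server.properties (broker side) -- illustrative values
  replica.lag.time.max.ms=30000        # a follower lagging longer than this is dropped from the ISR
  min.insync.replicas=2                # with acks=all, a write must reach at least 2 in-sync replicas
  unclean.leader.election.enable=false # never elect an out-of-sync (OSR) replica as the new leader

  # producer side: wait for every ISR member to acknowledge each record
  kafka-console-producer.sh --bootstrap-server 10.0.0.101:9092 --topic koten-3 \
    --producer-property acks=all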

Basic Usage of the Kafka Cluster

I. Starting Kafka

1. Start ZooKeeper first

  [root@ELK103 ~]# manager_zk.sh start
  Starting services
  ========== elk101 zkServer.sh start ================
  Starting zookeeper ... STARTED
  ========== elk102 zkServer.sh start ================
  Starting zookeeper ... STARTED
  ========== elk103 zkServer.sh start ================
  Starting zookeeper ... STARTED
  [root@ELK103 ~]# manager_zk.sh status
  Checking status
  ========== elk101 zkServer.sh status ================
  Client port found: 2181. Client address: localhost. Client SSL: false.
  Mode: follower
  ========== elk102 zkServer.sh status ================
  Client port found: 2181. Client address: localhost. Client SSL: false.
  Mode: leader
  ========== elk103 zkServer.sh status ================
  Client port found: 2181. Client address: localhost. Client SSL: false.
  Mode: follower

2. Model a Kafka control script on the ZooKeeper startup script, and make it executable

  [root@ELK101 ~]# cat /usr/local/sbin/manager_kafka.sh
  #!/bin/bash
  # Make sure the user passed exactly one argument
  if [ $# -ne 1 ];then
      echo "Invalid argument. Usage: $0 {start|stop|restart|status}"
      exit 1
  fi
  # The action requested by the user
  cmd=$1
  # Start Kafka on every node
  function startKafka(){
      for (( i=101 ; i<=103 ; i++ )) ; do
          tput setaf 2
          echo ========== elk${i} kafka-server-start.sh ================
          tput setaf 9
          ssh elk${i} "kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties"
      done
  }
  # Stop Kafka on every node (kafka-server-stop.sh takes no arguments)
  function stopKafka(){
      for (( i=101 ; i<=103 ; i++ )) ; do
          tput setaf 2
          echo ========== elk${i} kafka-server-stop.sh ================
          tput setaf 9
          ssh elk${i} "kafka-server-stop.sh"
      done
  }
  # Show whether the Kafka process is running on every node
  function statusKafka(){
      for (( i=101 ; i<=103 ; i++ )) ; do
          tput setaf 2
          echo ========== elk${i} kafka status ================
          tput setaf 9
          ssh elk${i} "jps | grep -i kafka || echo 'Kafka is not running'"
      done
  }
  # Dispatch on the requested action
  function kafkaManager(){
      case $cmd in
      start)
          echo "Starting services"
          startKafka
          ;;
      stop)
          echo "Stopping services"
          stopKafka
          ;;
      restart)
          echo "Restarting services"
          stopKafka
          startKafka
          ;;
      status)
          echo "Checking status"
          statusKafka
          ;;
      *)
          echo "Invalid argument. Usage: $0 {start|stop|restart|status}"
          ;;
      esac
  }
  # Entry point
  kafkaManager
  [root@ELK101 ~]# chmod +x /usr/local/sbin/manager_kafka.sh
  [root@ELK101 ~]# ll /usr/local/sbin/manager_kafka.sh
  -rwxr-xr-x 1 root root 1323 Jun 5 11:29 /usr/local/sbin/manager_kafka.sh

3. Start Kafka with the script

  [root@ELK101 ~]# manager_kafka.sh start
  Starting services
  ========== elk101 kafka-server-start.sh ================
  ========== elk102 kafka-server-start.sh ================
  ========== elk103 kafka-server-start.sh ================

II. Topic Management

1. Create

  [root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --create --topic koten
  Created topic koten. # create a topic named koten
  [root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --create --topic koten-3 --partitions 3
  Created topic koten-3. # create a topic named koten-3 with 3 partitions
  [root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --create --topic koten-3-2 --partitions 3 --replication-factor 2
  Created topic koten-3-2. # create a topic named koten-3-2 with 3 partitions and 2 replicas

2. Read

  # List all topics
  [root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --list
  koten
  koten-3
  koten-3-2
  # Describe all topics
  [root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --describe
  Topic: koten-3-2 TopicId: 1l4P-Tv_Q3aTasKediU3MQ PartitionCount: 3 ReplicationFactor: 2 Configs: segment.bytes=1073741824
  Topic: koten-3-2 Partition: 0 Leader: 103 Replicas: 103,102 Isr: 103
  Topic: koten-3-2 Partition: 1 Leader: 102 Replicas: 102,101 Isr: 102,101
  Topic: koten-3-2 Partition: 2 Leader: 101 Replicas: 101,103 Isr: 101,103
  Topic: koten TopicId: eXxgjWBySxe_WAx-fv83ZA PartitionCount: 1 ReplicationFactor: 1 Configs: segment.bytes=1073741824
  Topic: koten Partition: 0 Leader: 103 Replicas: 103 Isr: 103
  Topic: koten-3 TopicId: l7L3SY63QV-ayXOD46bViQ PartitionCount: 3 ReplicationFactor: 1 Configs: segment.bytes=1073741824
  Topic: koten-3 Partition: 0 Leader: 103 Replicas: 103 Isr: 103
  Topic: koten-3 Partition: 1 Leader: 102 Replicas: 102 Isr: 102
  Topic: koten-3 Partition: 2 Leader: 101 Replicas: 101 Isr: 101
  # Describe a specific topic
  [root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --describe --topic koten
  Topic: koten TopicId: eXxgjWBySxe_WAx-fv83ZA PartitionCount: 1 ReplicationFactor: 1 Configs: segment.bytes=1073741824
  Topic: koten Partition: 0 Leader: 103 Replicas: 103 Isr: 103

3. Update

The partition count can be changed directly from the command line. Changing the replication factor is more involved and requires a JSON reassignment plan; see this excellent post for details: https://www.cnblogs.com/yinzhengjie/p/9808125.html. A minimal sketch of that JSON approach follows the example below.

  # Change topic koten to 5 partitions
  [root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --alter --topic koten --partitions 5
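For reference, the JSON-based replication-factor change from the linked post boils down to a reassignment plan fed to kafka-reassign-partitions.sh. A minimal sketch for giving each partition of koten-3 a second replica; the replica lists here are illustrative, and in practice you would spread the leaders across brokers:

  cat > increase-rf.json <<'EOF'
  {"version":1,"partitions":[
    {"topic":"koten-3","partition":0,"replicas":[103,102]},
    {"topic":"koten-3","partition":1,"replicas":[102,101]},
    {"topic":"koten-3","partition":2,"replicas":[101,103]}
  ]}
  EOF
  kafka-reassign-partitions.sh --bootstrap-server 10.0.0.101:9092 \
    --reassignment-json-file increase-rf.json --execute
  # Check progress with --verify, then confirm via kafka-topics.sh --describe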

4. Delete

[root@ELK101 ~]# kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --delete --topic koten

III. Producers and Consumers

1. Start a producer

[root@ELK101 ~]# kafka-console-producer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic koten-3

2. Start a consumer (on the same host)

Pull data starting from the latest offset:

[root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.102:9092 --topic koten-3

Pull data from the beginning:

[root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.102:9092 --topic koten-3 --from-beginning
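The console consumer can also replay from an arbitrary position: --partition plus --offset reads a single partition starting at a given offset, which is handy for debugging. A small sketch; offset 5 is arbitrary:

  kafka-console-consumer.sh --bootstrap-server 10.0.0.102:9092 --topic koten-3 \
    --partition 0 --offset 5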

IV. Consumer Group Management

1. List the existing consumer groups

  [root@ELK101 ~]# kafka-consumer-groups.sh --bootstrap-server 10.0.0.101:9092 --list
  console-consumer-24702
  console-consumer-58734
  console-consumer-41114

2. Describe all consumer groups, including offsets, consumer IDs, LEO, and so on

  [root@ELK101 ~]# kafka-consumer-groups.sh --bootstrap-server 10.0.0.101:9092 --describe --all-groups
  Consumer group 'console-consumer-24702' has no active members.
  Consumer group 'console-consumer-41114' has no active members.
  GROUP                  TOPIC   PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID                                            HOST        CLIENT-ID
  console-consumer-58734 koten-3 0         -              1              -   console-consumer-f92294eb-383c-402b-9f9e-9a7ac5773b7d /10.0.0.101 console-consumer
  console-consumer-58734 koten-3 1         -              2              -   console-consumer-f92294eb-383c-402b-9f9e-9a7ac5773b7d /10.0.0.101 console-consumer
  console-consumer-58734 koten-3 2         -              3              -   console-consumer-f92294eb-383c-402b-9f9e-9a7ac5773b7d /10.0.0.101 console-consumer

3. Inspect the offset data in the built-in __consumer_offsets topic. This is just for reference; my run produced no output.

[root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic __consumer_offsets --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"   --from-beginning

4. Specify the consumer group via a config file

  [root@ELK101 ~]# cat $KAFKA_HOME/config/consumer.properties
  ......
  group.id=koten-consumer-group
  ......
  [root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092 --topic koten-topic --consumer.config $KAFKA_HOME/config/consumer.properties
  [2023-06-05 20:25:33,829] WARN [Consumer clientId=console-consumer, groupId=koten-consumer-group] Error while fetching metadata with correlation id 2 : {koten-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  # The LEADER_NOT_AVAILABLE warning just means koten-topic did not exist yet and was being auto-created; it goes away once the topic has a leader.

5. Specify the consumer group on the command line

[root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092 --topic koten-topic --consumer-property group.id=koten-consumer-group
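kafka-consumer-groups.sh can also rewind or fast-forward a group's committed offsets; the group must have no active consumers while you do this. A sketch using the group created above:

  # Preview what a reset to the earliest offset would do (--dry-run is the default)
  kafka-consumer-groups.sh --bootstrap-server 10.0.0.101:9092 --group koten-consumer-group \
    --topic koten-topic --reset-offsets --to-earliest --dry-run
  # Apply it
  kafka-consumer-groups.sh --bootstrap-server 10.0.0.101:9092 --group koten-consumer-group \
    --topic koten-topic --reset-offsets --to-earliest --execute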

ZooKeeper Heap Memory Tuning

For production, 2 GB to 4 GB is recommended.

1. Check ZooKeeper's current heap size

  [root@ELK101 ~]# jmap -heap `jps | awk '/QuorumPeerMain/{print $1}'` | grep MaxHeapSize
  MaxHeapSize = 1048576000 (1000.0MB)
  [root@ELK101 ~]#

2. Change the heap size

  [root@ELK101 ~]# cat > /koten/softwares/apache-zookeeper-3.8.1-bin/conf/java.env <<'EOF'
  #!/bin/bash
  # Path to the JDK installation
  export JAVA_HOME=/koten/softwares/jdk1.8.0_291
  # ZooKeeper heap size
  export JVMFLAGS="-Xms128m -Xmx128m $JVMFLAGS"
  EOF

3. Sync the config file to the other zk nodes in the cluster

  [root@ELK101 ~]# data_rsync.sh /koten/softwares/apache-zookeeper-3.8.1-bin/conf/java.env
  ===== rsyncing elk102: java.env =====
  Command executed successfully!
  ===== rsyncing elk103: java.env =====
  Command executed successfully!

4. Restart the zk cluster. The new heap size only takes effect after a restart. Note that the manager_zk.sh script is unreliable here: sometimes it reports the service stopped while the process is still alive, in which case you have to run zkServer.sh stop on each node by hand.

  [root@ELK101 ~]# manager_zk.sh restart
  Restarting services
  ========== elk101 zkServer.sh restart ================
  Stopping zookeeper ... STOPPED
  Starting zookeeper ... STARTED
  ========== elk102 zkServer.sh restart ================
  Stopping zookeeper ... STOPPED
  Starting zookeeper ... STARTED
  ========== elk103 zkServer.sh restart ================
  Stopping zookeeper ... STOPPED
  Starting zookeeper ... STARTED

5. Verify the heap size

  [root@ELK102 ~]# jmap -heap `jps | awk '/QuorumPeerMain/{print $1}'` | grep MaxHeapSize
  MaxHeapSize = 268435456 (256.0MB)

Kafka Heap Memory Tuning

1. Check the heap size

  [root@ELK101 ~]# jmap -heap `jps | awk '/Kafka/{print $1}'` | grep MaxHeapSize
  MaxHeapSize = 1073741824 (1024.0MB)

2. Change the heap size (5 GB to 6 GB is ideal for production)

This also enables the JMX port along the way.

  [root@ELK101 ~]# cat `which kafka-server-start.sh`
  ......
  if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
      # export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
      export KAFKA_HEAP_OPTS="-server -Xmx256M -Xms256M -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
      export JMX_PORT="8888"
  fi
  ......

3. Restart Kafka on this node and check the heap size

  [root@ELK101 ~]# kafka-server-stop.sh
  [root@ELK101 ~]# kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
  [root@ELK101 ~]# jmap -heap `jps | awk '/Kafka/{print $1}'` | grep MaxHeapSize
  MaxHeapSize = 268435456 (256.0MB)

4. Sync the startup script to the other nodes

  [root@ELK101 ~]# data_rsync.sh `which kafka-server-start.sh`
  ===== rsyncing elk102: kafka-server-start.sh =====
  Command executed successfully!
  ===== rsyncing elk103: kafka-server-start.sh =====
  Command executed successfully!

5. Restart Kafka on the other nodes and verify the heap change took effect

  [root@ELK101 ~]# manager_kafka.sh stop
  Stopping services
  ========== elk101 kafka-server-stop.sh ================
  ========== elk102 kafka-server-stop.sh ================
  ========== elk103 kafka-server-stop.sh ================
  [root@ELK101 ~]# manager_kafka.sh start
  Starting services
  ========== elk101 kafka-server-start.sh ================
  ========== elk102 kafka-server-start.sh ================
  ========== elk103 kafka-server-start.sh ================
  [root@ELK102 ~]# jmap -heap `jps | awk '/Kafka/{print $1}'` | grep MaxHeapSize
  MaxHeapSize = 268435456 (256.0MB)
  [root@ELK103 ~]# jmap -heap `jps | awk '/Kafka/{print $1}'` | grep MaxHeapSize
  MaxHeapSize = 268435456 (256.0MB)

Kafka Open-Source Monitoring Tool: kafka-eagle

A graphical way to manage Kafka.

I. Preparation

1. Enable Kafka's JMX port

Same as the heap tuning step above, so it is omitted here.

2. Enable ZooKeeper's JMX port

  [root@ELK101 ~]# cat /koten/softwares/apache-zookeeper-3.8.1-bin/bin/zkEnv.sh
  # zookeeper JMX
  JMXLOCALONLY=false
  JMXHOSTNAME=10.0.0.101
  JMXPORT=9999
  JMXSSL=false
  JMXLOG4J=false
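ZooKeeper needs a restart for the JMX setting to take effect. Before deploying eagle, it's worth confirming that both JMX ports are actually listening; 8888 is the Kafka JMX port set in kafka-server-start.sh earlier, 9999 the ZooKeeper one:

  ss -ntl | egrep '8888|9999'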

3. Install MySQL, start the service, and enable it at boot

  [root@ELK101 ~]# yum -y install mariadb-server
  [root@ELK101 ~]# cat /etc/my.cnf
  [mysqld]
  ......
  skip-name-resolve=1 # skip name resolution; without it, logins may trigger reverse DNS lookups
  [root@ELK101 ~]# systemctl enable --now mariadb
  Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.

4. Create the database and grant a user access

  [root@ELK101 ~]# mysql
  Welcome to the MariaDB monitor. Commands end with ; or \g.
  Your MariaDB connection id is 2
  Server version: 5.5.68-MariaDB MariaDB Server
  Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  MariaDB [(none)]> CREATE DATABASE koten_kafka DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
  Query OK, 1 row affected (0.00 sec)
  MariaDB [(none)]> CREATE USER admin IDENTIFIED BY 'koten';
  Query OK, 0 rows affected (0.00 sec)
  MariaDB [(none)]> GRANT ALL ON koten_kafka.* TO admin;
  Query OK, 0 rows affected (0.00 sec)
  MariaDB [(none)]> SHOW GRANTS FOR admin;
  +------------------------------------------------------------------------------------------------------+
  | Grants for admin@%                                                                                    |
  +------------------------------------------------------------------------------------------------------+
  | GRANT USAGE ON *.* TO 'admin'@'%' IDENTIFIED BY PASSWORD '*87F5F6FF9376D7C33FEB4C2AA1F7F99E9853F2DB'  |
  | GRANT ALL PRIVILEGES ON `koten_kafka`.* TO 'admin'@'%'                                                |
  +------------------------------------------------------------------------------------------------------+
  2 rows in set (0.00 sec)
  MariaDB [(none)]> quit
  Bye

5. Test the user

  [root@ELK101 ~]# mysql -u admin -pkoten -h 10.0.0.101
  Welcome to the MariaDB monitor. Commands end with ; or \g.
  Your MariaDB connection id is 3
  Server version: 5.5.68-MariaDB MariaDB Server
  Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  MariaDB [(none)]> show databases;
  +--------------------+
  | Database           |
  +--------------------+
  | information_schema |
  | koten_kafka        |
  | test               |
  +--------------------+
  3 rows in set (0.00 sec)
  MariaDB [(none)]> quit
  Bye

II. Deploying the Monitor

1. Download kafka-eagle. If the link below is slow, use the one I share at the end of this article.

[root@ELK101 ~]# wget https://github.com/smartloli/kafka-eagle-bin/archive/v2.0.8.tar.gz

2. Unpack the package

  [root@ELK101 ~]# unzip kafka-eagle-bin-2.0.8.zip
  Archive: kafka-eagle-bin-2.0.8.zip
    inflating: efak-web-2.0.8-bin.tar.gz
    inflating: system-config.properties
  [root@ELK101 ~]# tar xf efak-web-2.0.8-bin.tar.gz -C /koten/softwares/

3. Edit the config file

  [root@ELK101 ~]# cat /koten/softwares/efak-web-2.0.8/conf/system-config.properties
  efak.zk.cluster.alias=kafka
  kafka.zk.list=10.0.0.101:2181,10.0.0.102:2181,10.0.0.103:2181/kafka-3.2.1 # the chroot must match the one in Kafka's config
  kafka.efak.broker.size=20
  kafka.zk.limit.size=16
  efak.webui.port=8048
  kafka.efak.offset.storage=zk
  kafka.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi
  efak.metrics.charts=true
  efak.metrics.retain=15
  efak.sql.topic.records.max=5000
  efak.sql.topic.preview.records.max=10
  efak.topic.token=koten
  efak.driver=com.mysql.cj.jdbc.Driver
  efak.url=jdbc:mysql://10.0.0.101:3306/koten_kafka?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
  efak.username=admin
  efak.password=koten # the database password

4. Configure environment variables

  [root@ELK101 ~]# cat >> /etc/profile.d/kafka.sh <<'EOF'
  export KE_HOME=/koten/softwares/efak-web-2.0.8
  export PATH=$PATH:$KE_HOME/bin
  EOF
  [root@ELK101 ~]# source /etc/profile.d/kafka.sh

5. Lower the heap size set in the startup script

  [root@ELK101 ~]# sed -i '/KE_JAVA_OPTS/s#2g#256m#g' $KE_HOME/bin/ke.sh
  [root@ELK101 ~]# grep KE_JAVA_OPTS $KE_HOME/bin/ke.sh
  export KE_JAVA_OPTS="-server -Xmx256m -Xms256m -XX:MaxGCPauseMillis=20 -XX:+UseG1GC -XX:MetaspaceSize=128m -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"

6. Start the service

  [root@ELK101 ~]# ke.sh start
  ......
  [2023-06-05 22:36:46] INFO: [Job done!]
  Welcome to
      ______    ______    ___     __ __
     / ____/   / ____/   /   |   / //_/
    / __/     / /_      / /| |  / ,<
   / /___    / __/     / ___ | / /| |
  /_____/   /_/       /_/  |_|/_/ |_|
  ( Eagle For Apache Kafka® )
  Version 2.0.8 -- Copyright 2016-2021
  *******************************************************************
  * EFAK Service has started success.
  * Welcome, Now you can visit 'http://10.0.0.101:8048'
  * Account:admin ,Password:123456
  *******************************************************************
  * <Usage> ke.sh [start|status|stop|restart|stats] </Usage>
  * <Usage> https://www.kafka-eagle.org/ </Usage>
  *******************************************************************

7. Log in to Eagle

8. If you forget the password, you can look it up in the database

  MariaDB [koten_kafka]> SELECT * FROM koten_kafka.ke_users;
  +----+-------+----------+----------+-----------------+---------------+
  | id | rtxno | username | password | email           | realname      |
  +----+-------+----------+----------+-----------------+---------------+
  |  1 |  1000 | admin    | 123456   | admin@email.com | Administrator |
  +----+-------+----------+----------+-----------------+---------------+
  1 row in set (0.00 sec)
  MariaDB [koten_kafka]>

9. Once logged in, you can see monitoring data for both Kafka and ZooKeeper

View the dashboard

View the ZooKeeper and Kafka monitoring details

10. Beyond monitoring, you can also create topics and perform other operations on the Kafka cluster

Kafka Cluster Stress Testing

Stress-testing the cluster tells you its processing ceiling, which gives you a sound basis for tuning and capacity planning.

Before testing, understand the network path. If your real producers are not colocated with the Kafka cluster, running the stress test from a single local host is meaningless; once the path is clear, run the test from the actual producer and consumer hosts and adjust the host addresses in the commands accordingly.

1. Run the stress test with the script below

  mkdir /tmp/kafka-test
  cat > kafka-test.sh <<'EOF'
  # Create the topic
  kafka-topics.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic kafka-2023 --replication-factor 1 --partitions 10 --create
  # Start a consumer to read the data
  nohup kafka-consumer-perf-test.sh --broker-list 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic kafka-2023 --messages 100000 &>/tmp/kafka-test/kafka-consumer.log &
  # Start a producer to write the data
  nohup kafka-producer-perf-test.sh --num-records 100000 --record-size 1000 --topic kafka-2023 --throughput 1000 --producer-props bootstrap.servers=10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 &> /tmp/kafka-test/kafka-producer.log &
  EOF
  bash kafka-test.sh
  # Tune the following parameters to match your real workload:
  # kafka-consumer-perf-test.sh
  #   --messages: number of messages to consume.
  #   --broker-list: broker list.
  #   --topic: the topic.
  # kafka-producer-perf-test.sh
  #   --num-records: number of messages to produce.
  #   --record-size: size of each message, in bytes.
  #   --topic: the topic.
  #   --throughput: messages to send per second, i.e. a throughput cap; -1 means unlimited.
  #   --producer-props bootstrap.servers: broker list.

2. You can watch the progress in real time in EFAK.

3. You can also watch the produce and consume rates in the logs the script writes, for example:
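The log paths below come from the script above:

  tail -f /tmp/kafka-test/kafka-producer.log   # records/sec, MB/sec, and latency percentiles
  tail -f /tmp/kafka-test/kafka-consumer.log   # consumed MB/sec and messages/sec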

Integrating Filebeat with Kafka

Filebeat can sit on either side of Kafka. As a producer it writes into Kafka, and the data can then flow on into the ES cluster; as a consumer it reads data out of Kafka, either displaying it or forwarding it to ES. Examples of both follow.

I. Filebeat as a Producer

Filebeat is the producer here, so its output goes to Kafka; for the input we simply use stdin for testing.

1. Start a Kafka consumer

[root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic filebeat-to-kafka

2. Run Filebeat

  [root@ELK101 config]# cat 01-stdin-to-kafka.yaml
  filebeat.inputs:
  - type: stdin
  output.kafka:
    hosts: ["10.0.0.101:9092", "10.0.0.102:9092", "10.0.0.103:9092"]
    topic: 'filebeat-to-kafka'
  # Run filebeat and type some test data
  [root@ELK101 config]# filebeat -e -c config/01-stdin-to-kafka.yaml
  ...
  1234567
  # The Kafka consumer receives the data
  [root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic filebeat-to-kafka
  [2023-12-18 15:51:33,774] WARN [Consumer clientId=console-consumer, groupId=console-consumer-60747] Error while fetching metadata with correlation id 2 : {filebeat-to-kafka=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  {"@timestamp":"2023-12-18T07:54:06.389Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.17.5"},"ecs":{"version":"1.12.0"},"log":{"offset":0,"file":{"path":""}},"message":"1234567","input":{"type":"stdin"},"host":{"name":"ELK101"},"agent":{"id":"8fa0a9d7-f6d8-45b8-9355-ddf800e337fa","name":"ELK101","type":"filebeat","version":"7.17.5","hostname":"ELK101","ephemeral_id":"cef5e36c-ef9b-4c38-91d9-54f7c4db48fe"}}

II. Filebeat as a Consumer

1. Start Filebeat as the consumer

  [root@ELK101 config]# cat 02-kafka-to-filebeat.yaml
  filebeat.inputs:
  - type: kafka
    hosts:
      - 10.0.0.101:9092
      - 10.0.0.102:9092
      - 10.0.0.103:9092
    topics: ["kafka-to-filebeat"]
    group_id: "filebeat"
  output.console:
    pretty: true
  [root@ELK101 config]# filebeat -e -c config/02-kafka-to-filebeat.yaml

2. Produce messages from the Kafka side

  [root@ELK101 ~]# kafka-console-producer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic kafka-to-filebeat
  >123456ceshi
  >
  [root@ELK101 config]# filebeat -e -c config/02-kafka-to-filebeat.yaml
  ...
  {
    "@timestamp": "2023-12-18T09:06:52.226Z",
    "@metadata": {
      "beat": "filebeat",
      "type": "_doc",
      "version": "7.17.5"
    },
    "ecs": {
      "version": "1.12.0"
    },
    "host": {
      "name": "ELK101"
    },
    "agent": {
      "version": "7.17.5",
      "hostname": "ELK101",
      "ephemeral_id": "d4c29b42-d892-4532-bbe6-cff5f5a243f9",
      "id": "8fa0a9d7-f6d8-45b8-9355-ddf800e337fa",
      "name": "ELK101",
      "type": "filebeat"
    },
    "kafka": {
      "partition": 0,
      "offset": 0,
      "key": "",
      "headers": [],
      "topic": "kafka-to-filebeat"
    },
    "message": "123456ceshi",
    "input": {
      "type": "kafka"
    }
  }
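In a real pipeline you would usually forward what Filebeat consumes to the ES cluster instead of the console, as mentioned at the start of this section. A minimal sketch, assuming an ES node at 10.0.0.101:9200 and a custom index name (both are assumptions; adjust to your cluster, and note that overriding the index in Filebeat 7.x also needs the template/ILM settings shown):

  filebeat.inputs:
  - type: kafka
    hosts:
      - 10.0.0.101:9092
      - 10.0.0.102:9092
      - 10.0.0.103:9092
    topics: ["kafka-to-filebeat"]
    group_id: "filebeat"
  output.elasticsearch:
    hosts: ["10.0.0.101:9200"]
    index: "kafka-to-es-%{+yyyy.MM.dd}"
  # Required in 7.x when overriding the index name:
  setup.ilm.enabled: false
  setup.template.name: "kafka-to-es"
  setup.template.pattern: "kafka-to-es*"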

Integrating Logstash with Kafka

I. Logstash as a Producer

1. Start a Kafka consumer

[root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic logstash-to-kafka

2. Start Logstash as the producer

  [root@ELK101 logstash_cofig]# cat 01-logstash-to-kafka.yaml
  input {
    stdin {
    }
  }
  output {
    kafka {
      bootstrap_servers => "10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092"
      topic_id => "logstash-to-kafka"
    }
  }
  # Start logstash and type some data by hand
  [root@ELK101 logstash_cofig]# logstash -r -f 01-logstash-to-kafka.yaml
  ...
  ceshi123
  # The Kafka consumer receives the data
  [root@ELK101 ~]# kafka-console-consumer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic logstash-to-kafka
  ...
  2023-12-18T08:04:25.969Z ELK101
  2023-12-18T08:05:03.169Z ELK101 ceshi123

II. Logstash as a Consumer

1. Start Logstash as the consumer

  [root@ELK101 logstash_cofig]# cat 02-kafka-to-logstasg.yaml
  input {
    kafka {
      bootstrap_servers => "10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092"
      topics => "kafka-to-logstash"
    }
  }
  output {
    stdout {}
  }
  [root@ELK101 logstash_cofig]# logstash -r -f 02-kafka-to-logstasg.yaml

2. Produce messages from the Kafka side

  [root@ELK101 ~]# kafka-console-producer.sh --bootstrap-server 10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092 --topic kafka-to-logstash
  >123
  >
  [root@ELK101 logstash_cofig]# logstash -r -f 02-kafka-to-logstasg.yaml
  ...
  {
      "@version" => "1",
      "@timestamp" => 2023-12-18T09:19:51.170Z,
      "message" => "123"
  }
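To close the ELFK loop from the introduction, the stdout output can be swapped for Elasticsearch. A minimal sketch, assuming ES is reachable at 10.0.0.101:9200; the hosts, group_id, and index name are assumptions to adapt to your cluster:

  input {
    kafka {
      bootstrap_servers => "10.0.0.101:9092,10.0.0.102:9092,10.0.0.103:9092"
      topics => ["kafka-to-logstash"]
      group_id => "logstash"
    }
  }
  output {
    elasticsearch {
      hosts => ["http://10.0.0.101:9200"]
      index => "kafka-to-logstash-%{+YYYY.MM.dd}"
    }
  }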

Download link for the Kafka GUI management tool: https://pan.baidu.com/s/1_xciM_6OC0383f11phRycQ?pwd=j7ki

I'm koten, with 10 years of ops experience, continuously sharing hands-on ops knowledge. Thanks for reading and following!
