
Deploying a Highly Available ELK + Kafka Logging Stack with docker-compose on Three Ubuntu 22.04 Servers, with nginx Log Ingestion

I: System version: Ubuntu 22.04 on all three nodes.

II: Deployment environment:

| Node name | IP | Components & versions | Config file paths | CPU | Memory | Storage |
| --- | --- | --- | --- | --- | --- | --- |
| Log-001 | 10.10.100.1 | zookeeper:3.4.13, kafka:2.8.1, elasticsearch:7.7.0, logstash:7.7.0, kibana:7.7.0 | zookeeper:/data/zookeeper, kafka:/data/kafka, elasticsearch:/data/es, logstash:/data/logstash, kibana:/data/kibana | 2*1c/16 cores | 62 GB | 50 GB (system) + 800 GB (data disk) |
| Log-002 | 10.10.100.2 | same as Log-001 | same as Log-001 | 2*1c/16 cores | 62 GB | 50 GB (system) + 800 GB (data disk) |
| Log-003 | 10.10.100.3 | same as Log-001 | same as Log-001 | 2*1c/16 cores | 62 GB | 50 GB (system) + 800 GB (data disk) |

III: Deployment process:

(1) Install docker and docker-compose

    apt-get install -y docker.io
    wget https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
    mv docker-compose-Linux-x86_64 /usr/bin/docker-compose
    chmod +x /usr/bin/docker-compose
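The download URL above hard-codes `Linux-x86_64`; the release assets are named after `uname -s`/`uname -m`, so the URL can be composed for the local platform instead. A small sketch (the 1.29.2 version is the one pinned above; the `wget` line is left commented so the snippet can be dry-run anywhere):

```shell
# Compose the docker-compose download URL from the pinned version and the
# local platform instead of hard-coding "Linux-x86_64".
COMPOSE_VERSION=1.29.2
COMPOSE_URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$COMPOSE_URL"
# wget -O /usr/bin/docker-compose "$COMPOSE_URL" && chmod +x /usr/bin/docker-compose
```

On an x86_64 Linux host this prints the same URL used in the step above.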

(2) Pull the required images in advance

    docker pull zookeeper:3.4.13
    docker pull wurstmeister/kafka
    docker pull elasticsearch:7.7.0
    docker pull daocloud.io/library/kibana:7.7.0
    docker pull daocloud.io/library/logstash:7.7.0
    docker tag wurstmeister/kafka:latest kafka:2.12-2.5.0
    docker tag daocloud.io/library/kibana:7.7.0 kibana:7.7.0
    docker tag daocloud.io/library/logstash:7.7.0 logstash:7.7.0
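The pull-and-retag pairs can be collapsed into one loop. The `src=dst` strings below are just a compact rendering of the commands above; the real `docker` call is left commented so the loop can be dry-run on any machine:

```shell
# Dry-run loop over the image pull/retag pairs used above.
for pair in "wurstmeister/kafka:latest=kafka:2.12-2.5.0" \
            "daocloud.io/library/kibana:7.7.0=kibana:7.7.0" \
            "daocloud.io/library/logstash:7.7.0=logstash:7.7.0"; do
  src=${pair%%=*}   # image to pull
  dst=${pair#*=}    # local tag expected by docker-compose.yml
  echo "docker pull $src && docker tag $src $dst"
  # eval "docker pull $src && docker tag $src $dst"  # uncomment on the real host
done
```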

(3) Prepare the application config and data directories

    mkdir -p /data/zookeeper
    mkdir -p /data/kafka
    mkdir -p /data/logstash/conf
    mkdir -p /data/es/conf
    mkdir -p /data/es/data
    chmod 777 /data/es/data
    mkdir -p /data/kibana/conf
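`chmod 777` works but is very permissive; the official elasticsearch image runs as uid/gid 1000, so handing the data directory to that user with tighter permissions is an alternative. A sketch using a temporary path so it can be tried without root (on the real host, substitute `/data/es/data` and run as root):

```shell
# Tighter alternative to chmod 777: own the dir as the container user (uid 1000).
DATA_DIR="$(mktemp -d)"                        # stands in for /data/es/data
chown 1000:1000 "$DATA_DIR" 2>/dev/null \
  || echo "chown needs root; skipped for demo" # non-root fallback for the demo
chmod 770 "$DATA_DIR"
ls -ld "$DATA_DIR"
```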

(4) Edit each component's config file

    ## elasticsearch config file
    ~]# cat /data/es/conf/elasticsearch.yml
    cluster.name: es-cluster
    network.host: 0.0.0.0
    node.name: master1 ## change node.name on each node, e.g. master2, master3
    http.cors.enabled: true
    http.cors.allow-origin: "*" ## avoid cross-origin problems
    node.master: true
    node.data: true
    network.publish_host: 10.10.100.1 ## change to the local IP on each node
    discovery.seed_hosts: ["10.10.100.1","10.10.100.2","10.10.100.3"] ## 7.x replacement for discovery.zen.ping.unicast.hosts
    cluster.initial_master_nodes: ["10.10.100.1","10.10.100.2","10.10.100.3"]

    ## elasticsearch will fail on startup unless these kernel and user limits are raised first
    ~]# vim /etc/sysctl.conf
    vm.max_map_count=655350
    ~]# sysctl -p
    ~]# cat /etc/security/limits.conf
    * - nofile 100000
    * - fsize unlimited
    * - nproc 100000 ## unlimited nproc for *

    ## logstash config file
    ~]# cat /data/logstash/conf/logstash.conf
    input {
      kafka {
        topics => ["system-log"] ## must match the topic used by the producers
        bootstrap_servers => ["10.10.100.1:9092,10.10.100.2:9092,10.10.100.3:9092"]
      }
    }
    filter {
      grok {
        match => {
          "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IP:ip} %{DATA:syslog_program} %{GREEDYDATA:message}"
        }
        overwrite => ["message"]
      }
      date {
        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
    output {
      elasticsearch {
        hosts => ["10.10.100.1:9200","10.10.100.2:9200","10.10.100.3:9200"]
        index => "system-log-%{+YYYY.MM.dd}"
      }
      stdout {
        codec => rubydebug
      }
    }

    ~]# cat /data/logstash/conf/logstash.yml
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: [ "http://10.10.100.1:9200","http://10.10.100.2:9200","http://10.10.100.3:9200" ]

    ## kibana config file
    ~]# cat /data/kibana/conf/kibana.yml
    # Default Kibana configuration for the docker image
    server.name: kibana
    server.host: "0.0.0.0"
    elasticsearch.hosts: [ "http://10.10.100.1:9200","http://10.10.100.2:9200","http://10.10.100.3:9200" ]
    monitoring.ui.container.elasticsearch.enabled: true
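Before wiring Logstash up, the grok pattern can be roughly sanity-checked with a plain regex. The sketch below approximates `%{SYSLOGTIMESTAMP} %{IP}` with `grep -E` against a made-up sample line; it is only an approximation, not the grok engine itself:

```shell
# Rough regex equivalent of "%{SYSLOGTIMESTAMP:syslog_timestamp} %{IP:ip} ..."
# The sample line is invented purely for illustration.
sample='Jun  1 12:00:00 10.10.100.1 nginx worker process started'
if echo "$sample" | grep -Eq '^[A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2} ([0-9]{1,3}\.){3}[0-9]{1,3} '; then
  echo "sample matches"    # prints "sample matches"
else
  echo "sample does not match"
fi
```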

(5) All components are deployed and orchestrated with docker-compose; the docker-compose.yml lives in /root/elk_docker_compose/:

    ~]# mkdir /data/elk
    ~]# cat /root/elk_docker_compose/docker-compose.yml
    version: '2.1'
    services:
      elasticsearch:
        image: elasticsearch:7.7.0
        container_name: elasticsearch
        environment:
          ES_JAVA_OPTS: -Xms1g -Xmx1g
        network_mode: host
        volumes:
          - /data/es/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
          - /data/es/data:/usr/share/elasticsearch/data
        logging:
          driver: json-file
      kibana:
        image: kibana:7.7.0
        container_name: kibana
        depends_on:
          - elasticsearch
        volumes:
          - /data/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml
        logging:
          driver: json-file
        ports:
          - 5601:5601
      logstash:
        image: logstash:7.7.0
        container_name: logstash
        volumes:
          - /data/logstash/conf/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
          - /data/logstash/conf/logstash.yml:/usr/share/logstash/config/logstash.yml
        depends_on:
          - elasticsearch
        logging:
          driver: json-file
        ports:
          - 4560:4560
      zookeeper:
        image: zookeeper:3.4.13
        container_name: zookeeper
        environment:
          ZOO_PORT: 2181
          ZOO_DATA_DIR: /data/zookeeper/data
          ZOO_DATA_LOG_DIR: /data/zookeeper/logs
          ZOO_MY_ID: 1 # change to 2 / 3 on the other nodes
          ZOO_SERVERS: "server.1=10.10.100.1:2888:3888 server.2=10.10.100.2:2888:3888 server.3=10.10.100.3:2888:3888"
        volumes:
          - /data/zookeeper:/data/zookeeper
        network_mode: host
        logging:
          driver: json-file
      kafka:
        image: kafka:2.12-2.5.0
        container_name: kafka
        depends_on:
          - zookeeper
        environment:
          KAFKA_BROKER_ID: 1 # change to 2 / 3 on the other nodes
          KAFKA_PORT: 9092
          KAFKA_HEAP_OPTS: "-Xms1g -Xmx1g"
          KAFKA_HOST_NAME: 10.10.100.1 # local IP; change on each node
          KAFKA_ADVERTISED_HOST_NAME: 10.10.100.1 # local IP; change on each node
          KAFKA_LOG_DIRS: /data/kafka
          KAFKA_ZOOKEEPER_CONNECT: 10.10.100.1:2181,10.10.100.2:2181,10.10.100.3:2181
        network_mode: host
        volumes:
          - /data:/data
        logging:
          driver: json-file
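The three nodes' compose files differ only in a handful of values (ZOO_MY_ID, KAFKA_BROKER_ID, the advertised IP). Rather than hand-editing three copies, those fields can be stamped in from placeholders; a minimal sed sketch (the `__NODE_ID__`/`__NODE_IP__` placeholder names are made up for illustration):

```shell
# Stamp per-node values into a compose template fragment.
NODE_ID=2                 # 1 / 2 / 3 per node
NODE_IP=10.10.100.2       # that node's IP
sed -e "s/__NODE_ID__/${NODE_ID}/g" -e "s/__NODE_IP__/${NODE_IP}/g" <<'EOF'
ZOO_MY_ID: __NODE_ID__
KAFKA_BROKER_ID: __NODE_ID__
KAFKA_ADVERTISED_HOST_NAME: __NODE_IP__
EOF
```

In practice the whole docker-compose.yml would be the template and the output redirected to the node's copy.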

(6) Start the services

    # deploy (first adjust the config files and docker-compose.yml on each of the three nodes)
    ~]# docker-compose up -d
    # stop the running containers
    ~]# docker-compose stop
    # start a single container
    ~]# docker-compose up -d kafka

(7) Verify the status of each cluster component

    (1) Verify zookeeper:
    ]# docker exec -it zookeeper bash
    bash-4.4# zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /conf/zoo.cfg
    Mode: follower

    (2) Verify kafka:
    ]# docker exec -it kafka bash
    bash-4.4# kafka-topics.sh --list --zookeeper 10.10.100.1:2181
    __consumer_offsets
    system-log

    (3) Verify elasticsearch:
    ]# curl '10.10.100.1:9200/_cat/nodes?v'
    ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
    10.10.100.1 57           81           0  0.37    0.15    0.09     dilmrt    *      master2
    10.10.100.2 34           83           0  0.11    0.10    0.06     dilmrt    -      master1
    10.10.100.3 24           81           0  0.03    0.06    0.06     dilmrt    -      master3

    (4) Verify kibana:
    Open http://10.10.100.1:5601 in a browser
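The `_cat/nodes` output can also be checked mechanically. The sketch below runs against a captured sample of the output above (column 9 is the elected-master marker, column 10 the node name) and confirms three nodes with exactly one master:

```shell
# Sample copied from the _cat/nodes output above; in practice pipe curl here.
nodes='10.10.100.1 57 81 0 0.37 0.15 0.09 dilmrt * master2
10.10.100.2 34 83 0 0.11 0.10 0.06 dilmrt - master1
10.10.100.3 24 81 0 0.03 0.06 0.06 dilmrt - master3'
echo "node count:  $(echo "$nodes" | wc -l)"
echo "master node: $(echo "$nodes" | awk '$9 == "*" {print $10}')"
```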

IV: Log collection

(1) Taking nginx logs as the example, install the filebeat log shipper (filebeat is distributed from the Elastic apt repository, which must be added first):

    apt-get install filebeat

(2) Configure filebeat to write to kafka

    Enable the nginx module:
    sudo filebeat modules enable nginx

    Configure the nginx module: edit /etc/filebeat/modules.d/nginx.yml and make sure the log paths are correct, e.g.:
    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log*"]
      error:
        enabled: true
        var.paths: ["/var/log/nginx/error.log*"]

    Set the output to kafka: in filebeat.yml, configure the kafka output:
    output.kafka:
      # kafka broker addresses
      hosts: ["10.10.100.1:9092", "10.10.100.2:9092", "10.10.100.3:9092"]
      topic: "system-log"
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000

    Restart filebeat:
    sudo systemctl restart filebeat

(3) Validate the configuration with filebeat's config test command:

filebeat test config

(4) Connection test: verify that filebeat can reach kafka:

filebeat test output

(5) Log in to the kibana console and confirm the nginx logs are being collected.
