
Building a Log Collection and Analysis System with Filebeat + Redis + Logstash + Elasticsearch + Kibana

Environment

Logstash, Elasticsearch, and Kibana run on a single machine, set up with Docker.

Redis runs on a separate machine.

Filebeat runs on the same machine as the application whose logs are being collected.

Docker installation reference

 Docker installation notes

Installing Redis

  wget http://download.redis.io/releases/redis-6.0.8.tar.gz
  tar xzf redis-6.0.8.tar.gz
  cd redis-6.0.8
  make
  # If make fails with "/bin/sh: cc: command not found", install a compiler first:
  # yum install gcc-c++ -y
  # If it fails with "fatal error: jemalloc/jemalloc.h: No such file or directory", run:
  # make MALLOC=libc
  # If it fails with "error: 'struct redisServer' has no member named 'unixsocket'",
  # install and switch to a newer toolchain, then run make again:
  # yum -y install centos-release-scl
  # yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
  # scl enable devtoolset-9 bash
  # Start redis with default settings
  cd src
  ./redis-server
  # Additional settings in redis.conf:
  # allow remote access
  # bind 0.0.0.0
  # run in the background as a daemon
  # daemonize yes
  # set the password to 1234567890
  # requirepass 1234567890
  # Start redis using the config file
  cd src
  ./redis-server ../redis.conf
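The three redis.conf settings mentioned in the comments above, collected into one fragment (the password is the example value used throughout this article; redis.conf does not allow trailing comments, so each note sits on its own line):

```conf
# listen on all interfaces so Filebeat and Logstash can reach Redis remotely
bind 0.0.0.0
# run in the background as a daemon
daemonize yes
# example password from this article; use a strong one in practice
requirepass 1234567890
```

Note that binding to 0.0.0.0 with a simple password exposes Redis to the network; restrict access with a firewall so only the Filebeat and Logstash machines can connect.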

Installing the ELK stack

Setting up Elasticsearch 7.17.1 with Docker

  # Pull the image
  docker pull elasticsearch:7.17.1
  # Raise vm.max_map_count by appending it to the end of sysctl.conf
  vi /etc/sysctl.conf
  vm.max_map_count=262144
  # Save sysctl.conf, then reload the kernel settings
  /sbin/sysctl -p
  # On the host, create the config, data, and plugin folders to mount into the container
  cd /home
  mkdir -p elasticsearch/config
  mkdir -p elasticsearch/data
  mkdir -p elasticsearch/plugins
  echo "http.host: 0.0.0.0" >> elasticsearch/config/elasticsearch.yml
  chmod -R 777 elasticsearch/
  # Start Elasticsearch (the 64m/128m heap is only enough for a small demo)
  docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx128m" -v /home/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/elasticsearch/data:/usr/share/elasticsearch/data -v /home/elasticsearch/plugins:/usr/share/elasticsearch/plugins -d elasticsearch:7.17.1
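Elasticsearch refuses to start in production mode when vm.max_map_count is below 262144, so it is worth checking the current kernel value before (or after) the sysctl.conf edit above. A small Linux-only check script:

```shell
# Compare the kernel's current vm.max_map_count with Elasticsearch's minimum.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count=$current (ok)"
else
  echo "vm.max_map_count=$current (too low, need at least $required)"
fi
```

If it reports "too low", add the line to /etc/sysctl.conf and run /sbin/sysctl -p as shown above.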

Setting up Kibana 7.17.1 with Docker

  docker pull kibana:7.17.1
  docker run --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -d kibana:7.17.1

Setting up Logstash 7.17.1 with Docker

  docker pull logstash:7.17.1
  cd /home
  mkdir logstash
  cd /home/logstash
  mkdir config pipeline
  cd /home/logstash/config
  touch logstash.yml
  vim logstash.yml
  # Write the following two settings:
  # http.host: "0.0.0.0"
  # xpack.monitoring.elasticsearch.hosts: [ "http://10.0.3.102:9200" ]
  # Save and exit logstash.yml
  cd /home/logstash/pipeline
  touch logstash.conf
  vim logstash.conf
  # Write the input/output config: read log entries from Redis, write them to Elasticsearch
  # input {
  #   redis {
  #     host => "10.0.3.101"
  #     port => 6379
  #     password => "1234567890"
  #     data_type => list
  #     key => "filebeat"
  #   }
  # }
  #
  # output {
  #   elasticsearch {
  #     hosts => ["http://10.0.3.102:9200"]
  #     index => "applog"
  #   }
  # }
  # Save and exit logstash.conf
  chmod -R 777 /home/logstash/
  docker run -d --name logstash -p 5044:5044 -p 9600:9600 -v /home/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml -v /home/logstash/pipeline/:/usr/share/logstash/pipeline/ logstash:7.17.1
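The three docker run commands above can also be captured in a single Compose file (a sketch, not from the original article; the image tags, ports, and host paths are the ones used in these steps). Compose puts all services on one network, which replaces the deprecated --link flag:

```yaml
version: "3"
services:
  elasticsearch:
    image: elasticsearch:7.17.1
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms64m -Xmx128m
    ports: ["9200:9200", "9300:9300"]
    volumes:
      - /home/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/elasticsearch/data:/usr/share/elasticsearch/data
      - /home/elasticsearch/plugins:/usr/share/elasticsearch/plugins
  kibana:
    image: kibana:7.17.1
    ports: ["5601:5601"]
    depends_on: [elasticsearch]
  logstash:
    image: logstash:7.17.1
    ports: ["5044:5044", "9600:9600"]
    volumes:
      - /home/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /home/logstash/pipeline/:/usr/share/logstash/pipeline/
```

The Kibana 7 image connects to http://elasticsearch:9200 by default, which the compose network resolves to the elasticsearch service.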

Installing Filebeat (on the same machine as the application whose logs are collected)

  cd /home
  wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.1-linux-x86_64.tar.gz
  tar -xvf filebeat-7.17.1-linux-x86_64.tar.gz
  mv filebeat-7.17.1-linux-x86_64 filebeat
  cd filebeat
  touch log_redis.yml
  vi log_redis.yml
  # Replace the contents of log_redis.yml with the following:
  # .global: &global
  #   ignore_older: 30m
  #   scan_frequency: 5m
  #   harvester_limit: 1
  #   close_inactive: 1m
  #   clean_inactive: 45m
  #   close_removed: true
  #   clean_removed: true
  # filebeat.inputs:
  # - type: log
  #   enabled: true
  #   paths:
  #     - /opt/myproject/logs/catalina.out
  #   <<: *global
  # output.redis:
  #   hosts: ["10.0.3.101"]
  #   key: "filebeat"
  #   password: "1234567890"
  #   db: 0
  #   timeout: 5
  # Save and exit log_redis.yml
  # Run filebeat
  nohup ./filebeat -c log_redis.yml &
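nohup is fine for a quick test, but Filebeat started this way will not survive a reboot. A systemd unit keeps it running (a sketch; the /home/filebeat paths match the steps above, adjust if your install lives elsewhere):

```ini
# /etc/systemd/system/filebeat.service
[Unit]
Description=Filebeat log shipper
After=network.target

[Service]
WorkingDirectory=/home/filebeat
ExecStart=/home/filebeat/filebeat -c /home/filebeat/log_redis.yml
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with: systemctl daemon-reload && systemctl enable --now filebeat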

Verifying that log collection works

Log in to Kibana:

http://10.0.3.102:5601/

Find Index Management

and check whether the applog index has been created.

Create an index pattern for it,

then go to Discover and confirm that log entries are arriving.
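If the applog index never appears, it helps to look at what Filebeat actually pushed into the Redis list (for example with: redis-cli -h 10.0.3.101 -a 1234567890 LRANGE filebeat 0 0). Each entry is a JSON document. Below is a simplified, assumed example of such an event, plus a quick way to pull the original log line out of it; the real document carries more metadata (host, agent, offset, and so on):

```shell
# A (simplified) Filebeat event as it sits in the Redis "filebeat" list.
event='{"@timestamp":"2024-01-01T00:00:00.000Z","message":"app started","log":{"file":{"path":"/opt/myproject/logs/catalina.out"}}}'

# Extract the original log line from the "message" field (a crude sed parse,
# good enough for eyeballing a sample entry).
echo "$event" | sed -n 's/.*"message":"\([^"]*\)".*/\1/p'
# prints: app started
```

If LRANGE returns entries but the index is still missing, check the Logstash container logs (docker logs logstash) for connection or authentication errors.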
