
Setting Up Log Collection for an Elasticsearch Cluster

Stack: Elasticsearch 7.2.0 + Logstash 7.2.0 + Kibana 7.2.0 + Filebeat 7.6.0

Node 1, internal cluster IP: 10.0.0.223

ES config file: /es_data/es/elasticsearch-7.2.0/config/elasticsearch.yml

ES start command: /es_data/es/elasticsearch-7.2.0/bin/elasticsearch

cluster.name: es-search
node.name: node-machine-name
node.master: true
node.data: true
path.data: /es_data/data/es
path.logs: /es_data/log/es
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
network.host: 10.0.0.223
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
cluster.initial_master_nodes: ["10.0.0.223", "10.0.1.9", "10.0.1.10"]
discovery.seed_hosts: ["10.0.0.223", "10.0.1.10", "10.0.1.9"]
gateway.recover_after_nodes: 2
gateway.expected_nodes: 2
gateway.recover_after_time: 5m
action.destructive_requires_name: false
cluster.routing.allocation.disk.threshold_enabled: true
# Absolute (gb) watermark values are free-space thresholds: allocation stops
# below 20gb free, shards relocate below 10gb free, indices go read-only below 5gb free.
cluster.routing.allocation.disk.watermark.low: 20gb
cluster.routing.allocation.disk.watermark.high: 10gb
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
# Lock physical memory: true to enable, false to disable
bootstrap.memory_lock: false
# SecComp system-call filter check: true to enable, false to disable
bootstrap.system_call_filter: false
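Because the watermark values above are absolute (`gb`) rather than percentages, they are thresholds on *free* disk space, escalating from least to most severe. A minimal sketch of that decision logic (illustrative only, not Elasticsearch's actual allocator):

```python
def disk_allocation_state(free_gb: float) -> str:
    """Illustrative mapping of free disk space to the cluster's reaction,
    mirroring watermark.low=20gb, high=10gb, flood_stage=5gb above."""
    if free_gb < 5:
        return "flood_stage: affected indices forced read-only"
    if free_gb < 10:
        return "high: shards relocated away from this node"
    if free_gb < 20:
        return "low: no new shards allocated to this node"
    return "ok: normal allocation"

print(disk_allocation_state(15))  # low: no new shards allocated to this node
```

Note that once the flood stage is hit, the read-only block on indices may need to be cleared manually in this ES version after space is freed.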

Kibana config file: /es_data/es/kibana-7.2.0/config/kibana.yml

Start command: /es_data/es/kibana-7.2.0/bin/kibana

server.port: 5601
server.host: "localhost"
server.basePath: ""
server.rewriteBasePath: false
elasticsearch.hosts: ["http://10.0.0.223:9200", "http://10.0.1.9:9200", "http://10.0.1.10:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"

Kibana nginx reverse-proxy config

server {
    listen 80;
    server_name www.elasticsearch.com;
    client_max_body_size 1000m;
    location / {
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        # Kibana itself binds to localhost:5601 (server.host above)
        proxy_pass http://127.0.0.1:5601;
    }
}

Node 2, internal cluster IP: 10.0.1.10

ES config file: /es_data/es/elasticsearch-7.2.0/config/elasticsearch.yml

cluster.name: es-search
node.name: node-machine-name
node.master: true
node.data: true
path.data: /es_data/data/es
path.logs: /es_data/log/es
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
network.host: 10.0.1.10
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
cluster.initial_master_nodes: ["10.0.0.223", "10.0.1.9", "10.0.1.10"]
discovery.seed_hosts: ["10.0.0.223", "10.0.1.10", "10.0.1.9"]
gateway.recover_after_nodes: 2
gateway.expected_nodes: 2
gateway.recover_after_time: 5m
action.destructive_requires_name: false
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 20gb
cluster.routing.allocation.disk.watermark.high: 10gb
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
# Lock physical memory: true to enable, false to disable
bootstrap.memory_lock: false
# SecComp system-call filter check: true to enable, false to disable
bootstrap.system_call_filter: false

Logstash receiver configuration

Start command: /es_data/es/logstash-7.2.0/bin/logstash -f /es_data/es/logstash-7.2.0/conf.d/es_log.conf --path.data=/es_data/data/logstash/es_log/

Config file: /es_data/es/logstash-7.2.0/conf.d/es_log.conf

input {
  beats {
    port => 5044
  }
}

filter {
  mutate {
    split => ["message", "|"]
  }
  if [message][3] =~ /[0-9a-z]{40}/ {
    mutate {
      add_field => { "log_time" => "%{[message][0]}" }
      add_field => { "log_level" => "%{[message][1]}" }
      add_field => { "log_process_id" => "%{[message][2]}" }
      add_field => { "log_session" => "%{[message][3]}" }
      add_field => { "log_file_name" => "%{[message][6]}" }
      add_field => { "log_func_name" => "%{[message][7]}" }
      add_field => { "log_line" => "%{[message][8]}" }
    }
    mutate {
      update => { "message" => "%{[message][9]}" }
    }
  } else if [message][2] =~ /[0-9a-z]+-[0-9a-z]+-[0-9a-z]+-[0-9a-z]+-[0-9a-z]/ {
    mutate {
      add_field => { "log_time" => "%{[message][0]}" }
      add_field => { "log_level" => "%{[message][1]}" }
      add_field => { "log_process_id" => "%{[message][3]}" }
      add_field => { "log_session" => "%{[message][2]}" }
      add_field => { "log_thread_id" => "%{[message][4]}" }
      add_field => { "log_file_name" => "%{[message][5]}" }
      add_field => { "log_func_name" => "%{[message][6]}" }
      add_field => { "log_line" => "%{[message][7]}" }
    }
    mutate {
      update => { "message" => "%{[message][8]}" }
    }
  } else {
    mutate {
      split => ["message", ","]
    }
    mutate {
      add_field => { "log_time" => "%{[message][0]}" }
      add_field => { "log_level" => "%{[message][1]}" }
      add_field => { "log_process_id" => "%{[message][2]}" }
    }
    mutate {
      update => { "message" => "%{[message][3]}" }
    }
  }
  mutate {
    strip => ["log_time"]
  }
}

output {
  elasticsearch {
    hosts => ["10.0.0.223:9200", "10.0.1.10:9200", "10.0.1.9:9200"]
    index => "supervisor-log-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
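The filter above splits each line on `|` and branches on whether field 3 looks like a 40-character hex/hash token or field 2 looks like a UUID, picking fields by position. A minimal Python sketch of the first branch, to make the positional mapping concrete (the sample line is invented for illustration; it is not a real log from this system):

```python
import re

def parse_supervisor_line(line: str) -> dict:
    """Mimic the Logstash filter's first branch: split on '|',
    map fields by index when field 3 is a 40-char token."""
    parts = line.split("|")
    if len(parts) > 9 and re.search(r"[0-9a-z]{40}", parts[3]):
        return {
            "log_time": parts[0].strip(),   # mutate strip => ["log_time"]
            "log_level": parts[1],
            "log_process_id": parts[2],
            "log_session": parts[3],
            "log_file_name": parts[6],
            "log_func_name": parts[7],
            "log_line": parts[8],
            "message": parts[9],            # mutate update => "message"
        }
    # (UUID and comma-separated branches elided; same idea)
    return {"message": line}

# Hypothetical sample line matching the first branch:
sample = "2020-01-01 12:00:00 |INFO|1234|" + "a" * 40 + "|x|y|app.py|main|42|hello"
print(parse_supervisor_line(sample)["log_level"])  # INFO
```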

Node 3, internal cluster IP: 10.0.1.9

ES config file: /es_data/elasticsearch-7.2.0/config/elasticsearch.yml

cluster.name: es-search
node.name: node-machine-name
node.master: true
node.data: true
path.data: /es_data/data/es
path.logs: /es_data/log/es
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
network.host: 10.0.1.9
# Extra parameters so the elasticsearch-head plugin can reach ES
# (present since 5.x; add them manually if missing)
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
cluster.initial_master_nodes: ["10.0.0.223", "10.0.1.9", "10.0.1.10"]
discovery.seed_hosts: ["10.0.0.223", "10.0.1.10", "10.0.1.9"]
gateway.recover_after_nodes: 2
gateway.expected_nodes: 2
gateway.recover_after_time: 5m
action.destructive_requires_name: false
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 20gb
cluster.routing.allocation.disk.watermark.high: 10gb
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
# Lock physical memory: true to enable, false to disable
bootstrap.memory_lock: false
# SecComp system-call filter check: true to enable, false to disable
bootstrap.system_call_filter: false

Node 4: the machine that actually produces the logs

Filebeat config: /www/filebeat/filebeat.yml

Start command: /www/filebeat/filebeat -e -c /www/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  # document_type and input_type are pre-6.0 options that Filebeat 7.x no longer
  # accepts; `type: log` above and the custom `fields.type` below replace them.
  exclude_files: ["filebeat-out.log$"]
  paths:
    - /var/log/supervisor/*.log
  fields:
    type: supervisor
  encoding: plain
  multiline.pattern: '^\s\d{4}\-\d{2}\-\d{2}\s\d{2}:\d{2}:\d{2}'
  multiline.negate: true
  multiline.match: after

# Filebeat modules
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# Elasticsearch template settings
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0
setup.template.overwrite: true
setup.template.name: "machine-name"
setup.template.pattern: "machine-name*"

# Index lifecycle management
setup.ilm.enabled: true
setup.ilm.rollover_alias: "machine-name"
setup.ilm.pattern: "{now/d}-000001"
setup.ilm.policy_name: "machine-name-policy"

# Logstash output
output.logstash:
  enabled: true
  hosts: ["10.0.1.10:5044"]
  worker: 1
  compression_level: 3
  loadbalance: true
  pipelining: 0
  index: 'log_index'

# Processors
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
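With `multiline.negate: true` and `multiline.match: after`, every line that does not match `multiline.pattern` is appended to the preceding event, so multi-line output such as tracebacks stays in one document. Note the pattern expects a single leading whitespace character before the timestamp. A quick self-check of the same regex (sample lines invented for illustration):

```python
import re

# Same regex as multiline.pattern in the Filebeat config above.
pattern = re.compile(r'^\s\d{4}\-\d{2}\-\d{2}\s\d{2}:\d{2}:\d{2}')

def starts_new_event(line: str) -> bool:
    """True if the line begins a new log event (matches the timestamp pattern)."""
    return bool(pattern.match(line))

print(starts_new_event(" 2020-01-01 12:00:00 |INFO| ..."))       # True
print(starts_new_event("Traceback (most recent call last):"))    # False
print(starts_new_event("2020-01-01 12:00:00"))                   # False: no leading whitespace
```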

Kibana configuration:

Management → Index Patterns: create the index pattern supervisor-log-*

Once created, the matching indices show up under Index Management.

Then create an index lifecycle policy and attach it to the matching index template, so logs beyond the retention period are removed automatically and the cluster stays healthy.
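The `setup.ilm.policy_name: "machine-name-policy"` in the Filebeat config refers to a policy of this kind. A minimal sketch of what its JSON body could look like (the 50gb/7d rollover and 30-day deletion are illustrative assumptions, not values from this setup; tune them to your retention needs):

```python
import json

# Hypothetical ILM policy: roll the write index over, then delete old indices.
# Phase and action names follow the Elasticsearch ILM API.
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_size": "50gb", "max_age": "7d"}
                }
            },
            "delete": {
                "min_age": "30d",
                "actions": {"delete": {}}
            }
        }
    }
}

print(json.dumps(policy, indent=2))
```

A body like this would be sent with `PUT _ilm/policy/machine-name-policy`, for example from the Kibana Dev Tools console.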
