
Installing ELK (Elasticsearch + Logstash + Kibana) + Filebeat with Docker on CentOS 7.9

Contents

I. Install the JDK

II. Deploy Elasticsearch

III. Deploy Kibana

IV. Deploy Logstash

V. Deploy Filebeat

VI. Collect with Filebeat, filter with Logstash, display in Kibana

VII. Add the index pattern in Kibana


Note: throughout this article, ip denotes the IP address of the host server, and esip denotes the ES container's internal IP address.

I. Install the JDK

1. Update the system

sudo yum update

2. Install Java

Install OpenJDK with:

sudo yum install java-1.8.0-openjdk

3. Verify the installation

java -version

II. Deploy Elasticsearch

1. Check whether Docker is installed

docker version
Client: Docker Engine - Community
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.6
 Git commit:        ced0996
 Built:             Fri Jul 21 20:39:02 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.6
  Git commit:       a61e2b4
  Built:            Fri Jul 21 20:38:05 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

The latest Docker release may be incompatible with some systems; installing a slightly older version can avoid this.
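If an older release is needed, and assuming the docker-ce yum repository is already configured, the available versions can be listed and a specific one pinned (the version string below is purely illustrative):

yum list docker-ce --showduplicates | sort -r
sudo yum install docker-ce-20.10.24 docker-ce-cli-20.10.24 containerd.io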

2. Find and install the Elasticsearch image

Search:

docker search elasticsearch
NAME                                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
elasticsearch                             Elasticsearch is a powerful open source sear…   6122    [OK]
kibana                                    Kibana gives shape to any kind of data — str…   2626    [OK]
bitnami/elasticsearch                     Bitnami Docker Image for Elasticsearch          67                 [OK]
bitnami/elasticsearch-exporter            Bitnami Elasticsearch Exporter Docker Image     7                  [OK]
rancher/elasticsearch-conf                                                                2
rapidfort/elasticsearch                   RapidFort optimized, hardened image for Elas…   10
bitnami/elasticsearch-curator-archived    A copy of the container images of the deprec…   0
rapidfort/elasticsearch-official          RapidFort optimized, hardened image for Elas…   0
bitnamicharts/elasticsearch                                                               0
onlyoffice/elasticsearch                                                                  1
rancher/elasticsearch                                                                     1
couchbase/elasticsearch-connector         Couchbase Elasticsearch Connector               0
rancher/elasticsearch-bootstrap                                                           1
dtagdevsec/elasticsearch                  T-Pot Elasticsearch                             4                  [OK]
corpusops/elasticsearch                   https://github.com/corpusops/docker-images/    0
vulhub/elasticsearch                                                                      0
uselagoon/elasticsearch-7                                                                 0
securecodebox/elasticsearch                                                               0
eucm/elasticsearch                        Elasticsearch 1.7.5 Docker Image                1                  [OK]
ilios/elasticsearch                                                                       0
uselagoon/elasticsearch-6                                                                 0
openup/elasticsearch-0.90                                                                 0
litmuschaos/elasticsearch-stress                                                          0
drud/elasticsearch_exporter                                                               0
geekzone/elasticsearch-curator                                                            0

Install:

docker pull elasticsearch:7.7.1
7.7.1: Pulling from library/elasticsearch
524b0c1e57f8: Pull complete
4f79045bc94a: Pull complete
4602c5830f92: Pull complete
10ef2eb1c9b1: Pull complete
47fca9194a1b: Pull complete
c282e1371ecc: Pull complete
302e1effd34b: Pull complete
50acbec75309: Pull complete
f89bc5c60b5f: Pull complete
Digest: sha256:dff614393a31b93e8bbe9f8d1a77be041da37eac2a7a9567166dd5a2abab7c67
Status: Downloaded newer image for elasticsearch:7.7.1
docker.io/library/elasticsearch:7.7.1

3. List the installed Docker images

docker images
REPOSITORY                                                         TAG       IMAGE ID       CREATED       SIZE
elasticsearch                                                      7.7.1     830a894845e3   3 years ago   804MB
k8s.gcr.io/kube-proxy                                              v1.17.4   6dec7cfde1e5   3 years ago   116MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.17.4   6dec7cfde1e5   3 years ago   116MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.17.4   2e1ba57fe95a   3 years ago   171MB
k8s.gcr.io/kube-apiserver                                          v1.17.4   2e1ba57fe95a   3 years ago   171MB
k8s.gcr.io/kube-controller-manager                                 v1.17.4   7f997fcf3e94   3 years ago   161MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.17.4   7f997fcf3e94   3 years ago   161MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.17.4   5db16c1c7aff   3 years ago   94.4MB
k8s.gcr.io/kube-scheduler                                          v1.17.4   5db16c1c7aff   3 years ago   94.4MB
k8s.gcr.io/coredns                                                 1.6.5     70f311871ae1   3 years ago   41.6MB
k8s.gcr.io/etcd                                                    3.4.3-0   303ce5db0e90   3 years ago   288MB
registry.aliyuncs.com/google_containers/etcd                       3.4.3-0   303ce5db0e90   3 years ago   288MB
k8s.gcr.io/pause                                                   3.1       da86e6ba6ca1   5 years ago   742kB
registry.aliyuncs.com/google_containers/pause                      3.1       da86e6ba6ca1   5 years ago   742kB
kubeguide/hadoop                                                   latest    e0af06208032   6 years ago   830MB

4. Create the mount directories

[root@ceph-node4 ~]# mkdir -p /data/elk/es/{config,data,logs}

5. Set ownership

Inside the container, the elasticsearch user's UID is 1000.

[root@ceph-node4 ~]# chown -R 1000:1000 /data/elk/es

6. Create the mounted config file

cd /data/elk/es/config
touch elasticsearch.yml
vi elasticsearch.yml

#[elasticsearch.yml]
cluster.name: "my-es"
network.host: 0.0.0.0
http.port: 9200
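One pitfall worth checking before starting the container: Elasticsearch needs the kernel setting vm.max_map_count to be at least 262144. If the container exits during startup with a "max virtual memory areas vm.max_map_count [65530] is too low" bootstrap error, raise it on the host:

sysctl -w vm.max_map_count=262144
# make it persistent across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf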

7. Run Elasticsearch

Start a container from the image and map ports 9200 and 9300 to the host (Elasticsearch's default HTTP port is 9200; the host's port 9200 is mapped to port 9200 inside the container).

docker run -it  -d -p 9200:9200 -p 9300:9300 --name es -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -e "discovery.type=single-node" --restart=always -v /data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/elk/es/data:/usr/share/elasticsearch/data -v /data/elk/es/logs:/usr/share/elasticsearch/logs elasticsearch:7.7.1
9e70d30eaa571c6a54572d5babb14e688220494ca039b292d0cb62a54a982ebb
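The node can take a short while to come up; cluster health can be polled in the meantime (for a single node, yellow or green status is expected):

curl http://localhost:9200/_cluster/health?pretty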

8. Verify the installation

curl http://localhost:9200
{
  "name" : "9e70d30eaa57",
  "cluster_name" : "my-es",
  "cluster_uuid" : "nWsyXGd1RtGATFs4itJ4nQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

III. Deploy Kibana

1. Install Kibana

docker pull kibana:7.7.1
7.7.1: Pulling from library/kibana
524b0c1e57f8: Already exists
103dc10f20b6: Pull complete
e397e023efd5: Pull complete
f0ee6620405c: Pull complete
17e4e03944f0: Pull complete
eff8f4cc3749: Pull complete
fa92cc28ed7e: Pull complete
afda7e77e6ed: Pull complete
019e109bb7c5: Pull complete
e82949888e47: Pull complete
15f31b4d9a52: Pull complete
Digest: sha256:ea0eab16b0330e6b3d9083e3c8fd6e82964fc9659989a75ecda782fbd160fdaa
Status: Downloaded newer image for kibana:7.7.1
docker.io/library/kibana:7.7.1

2. Check that the pull completed

docker images

3. Get the Elasticsearch container's IP (esip)

docker inspect --format '{{ .NetworkSettings.IPAddress }}' es
172.17.0.2

This esip is the container's internal address used for inter-container communication, not an externally reachable IP.

Check the IP (the container is named es, so inspect it by that name):

docker inspect es | grep IPAddress

Inspect the container's full state, including the detailed esip:

docker inspect es
"IPAddress": "172.20.0.2"

4. Create the config file

Create the directory, generate the yml file, and grant write permission.

sudo mkdir -p /data/elk/kibana
sudo touch /data/elk/kibana/kibana.yml
sudo chmod +w /data/elk/kibana/kibana.yml

Edit the config file:

vi /data/elk/kibana/kibana.yml
#[kibana.yml]
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.17.0.2:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true

The elasticsearch.hosts value here is http://esip:9200.
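Since the container IP can change, a small sketch to look it up and patch kibana.yml automatically (assuming the ES container is named es, as above):

ES_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' es)
sed -i "s|http://[0-9.]*:9200|http://${ES_IP}:9200|" /data/elk/kibana/kibana.yml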

5. Run Kibana

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kibana -p 5601:5601 -v /data/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.7.1
87b91be986938ad581fb79354bd41895eb874ce74b0688ed6e46396691e040a4

Check the status:

docker ps | grep kibana
docker ps
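Kibana also exposes a status endpoint that can be curled once the container is up (it may take a minute or two to report available):

curl -s http://localhost:5601/api/status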

To stop and remove the Kibana container:

docker stop kibana
docker rm kibana

6. Access the web UI

Open http://ip:5601 in a browser.


If the UI is unreachable:

1. Check the Kibana container's config file

Make sure the elasticsearch.hosts entry in the config file points at the Elasticsearch container's address.

docker exec -it kibana /bin/bash
vi config/kibana.yml
#[kibana.yml]
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.17.0.2:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true

Make sure this matches what was configured earlier, and pay particular attention to the esip: it is assigned dynamically, so it can change every time the server restarts.
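A more robust alternative than chasing the container IP is a user-defined Docker network, on which containers resolve one another by name. A sketch (flags abbreviated; reuse the full run commands from earlier with --network added, and do the same for the kibana and logstash containers):

docker network create elk
docker run -d --name es --network elk -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" elasticsearch:7.7.1
# kibana.yml can then reference a stable hostname instead of an IP:
# elasticsearch.hosts: ["http://es:9200"]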

2. Restart Kibana

docker restart kibana
kibana

3. Check the running containers

docker ps

Check the Kibana logs:

docker logs kibana

4. Visit http://ip:5601 again

If startup is slow, refresh the page a few times.

Chinese locale:

The Kibana config file kibana.yml should be at /data/elk/kibana/kibana.yml.

To set i18n.locale to zh-CN, open /data/elk/kibana/kibana.yml and append the following line at the end:

i18n.locale: "zh-CN"

Then restart the Kibana container for the change to take effect:

docker restart kibana

Finally, reload the web UI; it will now be in Chinese.

IV. Deploy Logstash

1. Pull the Logstash image

docker pull logstash:7.7.1
7.7.1: Pulling from library/logstash
524b0c1e57f8: Already exists
1a7635b4d6e8: Pull complete
92c26c13a43f: Pull complete
189edda23928: Pull complete
4b71f12aa7b2: Pull complete
8eae4815fe1e: Pull complete
4c2df663cec5: Pull complete
bc06e285e821: Pull complete
2fadaff2f68a: Pull complete
89a9ec66a044: Pull complete
724600a30902: Pull complete
Digest: sha256:cf2a17d96e76e5c7a04d85d0f2e408a0466481b39f441e9d6d0aad652e033026
Status: Downloaded newer image for logstash:7.7.1
docker.io/library/logstash:7.7.1

2. Create the logstash.yml config file; the directories it references must be created as well.

mkdir /data/elk/logstash/
touch /data/elk/logstash/logstash.yml
vi /data/elk/logstash/logstash.yml

#[logstash.yml]
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://172.17.0.2:9200" ]
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
#path.config: /data/elk/logstash/conf.d/*.conf
path.config: /data/docker/logstash/conf.d/*.conf
path.logs: /var/log/logstash

The hosts entry here is again the esip.

3. Create the pipeline config; to begin with, Logstash collects local syslog data and sends it straight to ES.

mkdir /data/elk/logstash/conf.d/
touch /data/elk/logstash/conf.d/syslog.conf
vi /data/elk/logstash/conf.d/syslog.conf
cat /data/elk/logstash/conf.d/syslog.conf

#[syslog.conf]
input {
  syslog {
    type => "system-syslog"
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["ip:9200"]
    index => "system-syslog-%{+YYYY.MM}"
  }
}

Here ip is the IP address of the server hosting the containers.

4. Add a forwarding rule to the local rsyslog config:

vi /etc/rsyslog.conf
*.* @@ip:5044

Again, ip is the address of the server hosting the containers; the double @@ forwards over TCP (a single @ would use UDP).

 

5. Restart the service after changing the config

systemctl restart rsyslog
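Once Logstash is running (next step), a quick way to push a test message through rsyslog is logger(1):

logger "ELK pipeline test message"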

6. Run Logstash

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 -p 5044:5044 --name logstash -v /data/elk/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml -v /data/elk/logstash/conf.d/:/data/docker/logstash/conf.d/ logstash:7.7.1
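Tail the container log to confirm the pipeline starts cleanly and the syslog input binds to port 5044:

docker logs -f logstash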

7. Confirm that ES receives data from Logstash

curl http://localhost:9200/_cat/indices?v

health status index                      uuid                     pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-custom-link           SUOJEG0hRlCrQ2cQUr1S7g     1   0          0            0       208b           208b
green  open   .kibana_task_manager_1     c7ZI_gS_T1GbFrlOMlB4bw     1   0          5            0     54.9kb         54.9kb
green  open   .apm-agent-configuration   f684gzXURZK6Q13GPGZIhg     1   0          0            0       208b           208b
green  open   .kibana_1                  xtNccoc-Ru2zSoXJe8AA1Q     1   0         36            2     55.8kb         55.8kb
yellow open   system-syslog-2023.07      AUPeJ5I8R6-iWkdeTTJuAw     1   1         29            0     60.9kb         60.9kb

 

If the system-syslog-* index shows up, ES is receiving data from Logstash, and the same data appears in Kibana.

V. Deploy Filebeat

1. Install Filebeat with yum on the machines to be monitored

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-x86_64.rpm
yum install filebeat-7.7.1-x86_64.rpm
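Verify the installed version:

filebeat version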

2. Configure Filebeat; to begin with, Filebeat sends data directly to ES.

vim /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/ceph/*.log
    - /var/log/messages
Complete filebeat.yml (latest), Ceph version:

cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/ceph/*.log
    #- c:\programdata\elasticsearch\logs\*
  fields:
    log_type: ceph

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["ip:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["ip:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
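Before starting the service, the syntax and the connection to the configured output can be checked with Filebeat's built-in test subcommands:

filebeat test config
filebeat test output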

3. Start the service

[root@ceph-node3 ~]# systemctl restart filebeat.service

4. Query the data received by ES

curl http://localhost:9200/_cat/indices?v

health status index                              uuid                     pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-custom-link                   SUOJEG0hRlCrQ2cQUr1S7g     1   0          0            0       208b           208b
green  open   .kibana_task_manager_1             c7ZI_gS_T1GbFrlOMlB4bw     1   0          5            0     54.9kb         54.9kb
green  open   .apm-agent-configuration           f684gzXURZK6Q13GPGZIhg     1   0          0            0       208b           208b
yellow open   filebeat-7.7.1-2023.07.28-000001   38f_nqi_TdWXDRbXdTV0ng     1   1      75872            0     19.8mb         19.8mb
green  open   .kibana_1                          xtNccoc-Ru2zSoXJe8AA1Q     1   0         39            2     70.3kb         70.3kb
yellow open   system-syslog-2023.07              AUPeJ5I8R6-iWkdeTTJuAw     1   1         31            0    111.5kb        111.5kb

The filebeat-7.7.1-* index is now present, and the corresponding data also shows up in Kibana.

VI. Collect with Filebeat, filter with Logstash, display in Kibana

1. Delete the test data generated earlier by Logstash

curl -XDELETE http://localhost:9200/system-syslog-2023.07
{"acknowledged":true}

2. Edit filebeat.yml, then restart the service

vim /etc/filebeat/filebeat.yml
cat /etc/filebeat/filebeat.yml

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["ip:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Then restart the service:

systemctl restart filebeat.service

3. Create the logstash.conf pipeline config

touch /data/elk/logstash/conf.d/logstash.conf
vi /data/elk/logstash/conf.d/logstash.conf
cat /data/elk/logstash/conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["172.17.0.2:9200"]
    index => "filebeat_g-%{+YYYY.MM.dd}"
  }
}
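Config reloading is not enabled in logstash.yml, so restart the container to pick up the new pipeline file (the conf.d bind mount makes it visible at the path.config location inside the container):

docker restart logstash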

 

4. Check whether ES is receiving the data

curl http://localhost:9200/_cat/indices?v

health status index                              uuid                     pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-custom-link                   SUOJEG0hRlCrQ2cQUr1S7g     1   0          0            0       208b           208b
green  open   .kibana_task_manager_1             c7ZI_gS_T1GbFrlOMlB4bw     1   0          5            0     54.9kb         54.9kb
green  open   .apm-agent-configuration           f684gzXURZK6Q13GPGZIhg     1   0          0            0       208b           208b
yellow open   filebeat-7.7.1-2023.07.28-000001   38f_nqi_TdWXDRbXdTV0ng     1   1      76257            0     19.9mb         19.9mb
green  open   .kibana_1                          xtNccoc-Ru2zSoXJe8AA1Q     1   0         39            2     70.3kb         70.3kb
yellow open   system-syslog-2023.07              -sFCBdQJTx62qc6omgKEiA     1   1         25            0      291kb          291kb

The filebeat_g-* data is now coming in; all that remains is to add the matching index pattern in Kibana.

VII. Add the index pattern in Kibana and observe system status
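In the UI this is done under Management → Index Patterns → Create index pattern: enter filebeat_g-* as the pattern and pick @timestamp as the time field, after which the incoming entries appear on the Discover page. The same step can be scripted against Kibana's saved-objects API (a sketch for Kibana 7.x; replace ip with the server address):

curl -X POST "http://ip:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes":{"title":"filebeat_g-*","timeFieldName":"@timestamp"}}'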

