
Log System, Part 2 (ilogtail + Kafka + Logstash + ES + Kibana)

ilogtail

Pipeline overview:

  1. ilogtail collects logs and writes them to a designated Kafka topic
  2. Logstash consumes the Kafka messages and writes them into ES
  3. Kibana visualizes the data

 Note:
Logs collected by ilogtail can also be written directly to ES, but that requires ES 8.0+.

1. Introduction to ilogtail

Overview

iLogtail was built for observability scenarios. It offers many production-grade features such as a lightweight footprint, high performance, and automated configuration, and it is widely used inside Alibaba as well as by tens of thousands of external Alibaba Cloud customers. You can deploy it on physical machines, virtual machines, Kubernetes, and other environments to collect telemetry data such as logs, traces, and metrics.

Advantages

There are many open-source agents for collecting observability data, such as Logstash, Fluentd, and Filebeat. They are all feature-rich, but iLogtail's design gives it an advantage in key areas such as performance, stability, and manageability.

iLogtail comes in a community edition and a commercial edition. This article uses the open-source community edition, deployed to Kubernetes as a DaemonSet.

Prerequisites

● A deployed Kubernetes cluster
● kubectl access to that cluster

2. Deploying ilogtail

ilogtail-daemonset.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: ilogtail
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ilogtail-ds
  namespace: ilogtail
  labels:
    k8s-app: logtail-ds
spec:
  selector:
    matchLabels:
      k8s-app: logtail-ds
  template:
    metadata:
      labels:
        k8s-app: logtail-ds
    spec:
      containers:
        - name: logtail
          env:
            - name: ALIYUN_LOG_ENV_TAGS # add log tags from env
              value: _node_name_|_node_ip_
            - name: _node_name_
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: _node_ip_
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            - name: cpu_usage_limit
              value: "1"
            - name: mem_usage_limit
              value: "512"
          image: >-
            sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 400m
              memory: 400Mi
          volumeMounts:
            - mountPath: /var/run
              name: run
            - mountPath: /logtail_host
              mountPropagation: HostToContainer
              name: root
              readOnly: true
            - mountPath: /usr/local/ilogtail/checkpoint
              name: checkpoint
            - mountPath: /usr/local/ilogtail/user_yaml_config.d
              name: user-config
              readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
        - hostPath:
            path: /var/run
            type: Directory
          name: run
        - hostPath:
            path: /
            type: Directory
          name: root
        - hostPath:
            path: /var/lib/ilogtail-ilogtail-ds/checkpoint
            type: DirectoryOrCreate
          name: checkpoint
        - hostPath:
            path: /webtv/ilogtail-ilogtail-ds/user_yaml_config.d
            type: DirectoryOrCreate
          name: user-config

Notes:

  • The iLogtail community edition does not currently support hot-reloading of configuration, so create the configuration first and start the iLogtail containers afterwards. If you need to change a config later, edit it and restart the ilogtail pods/containers for it to take effect (see the sketch after this list).

  • A ConfigMap can be mounted into the iLogtail container as the collection-config directory, and may contain multiple iLogtail collection-config files; in this deployment a hostPath directory is used for the same purpose.

  • No toleration is configured for the master taint, so the DaemonSet is not scheduled on master nodes.

  • If a large number of log files must be collected, relax the resource limits accordingly.

    /var/run: the socket iLogtail uses to talk to the container runtime
    /logtail_host: iLogtail mounts the host root directory to reach the logs of every container on the node
    /usr/local/ilogtail/checkpoint: persists state to the host disk so it survives iLogtail container restarts
    /usr/local/ilogtail/user_yaml_config.d: mounts the collection configs into the container
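Since there is no hot reload, a config change takes effect only after the DaemonSet pods restart. A minimal sketch (namespace and pod label taken from the manifest above):

# Roll the DaemonSet so new pods re-read the config directory
kubectl -n ilogtail rollout restart daemonset/ilogtail-ds

# Or delete the pods and let the DaemonSet recreate them
kubectl -n ilogtail delete pod -l k8s-app=logtail-ds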

Place the collection-config files on the host under /webtv/ilogtail-ilogtail-ds/user_yaml_config.d (the host directory mapping can be customized in the YAML above). The business requirement is to collect the following paths: /var/log/nginx/*access.log, /var/log/nginx/error.log, /var/log/nginx/access/*.log, /usr/local/tomcat/logs/cronlog/access*.log, /usr/local/tomcat/logs/cronlog/*.log, /usr/local/tomcat/logs/catalina*.log, and /mcloud/*.log.

The collection configs are as follows:

nginx_access.yaml

enable: true
inputs:
  # Collect file logs
  - Type: file_log
    LogPath: /var/log/nginx/
    FilePattern: "*access.log"
    MaxDepth: 0
    # Whether the files live inside containers
    ContainerFile: true
processors:
  - Type: processor_json
    SourceKey: content
    # Whether to keep the original field
    KeepSource: false
    # JSON expansion depth
    ExpandDepth: 4
    # Connector used when expanding keys
    ExpandConnector: "_"
    # Whether to prefix expanded keys with the source key
    #UseSourceKeyAsPrefix: true
  - Type: processor_grok
    SourceKey: content
    KeepSource: false
    # Array of Grok expressions to try
    Match:
      - '\[(?<time_local>.*?)\] \[(?<remote_addr>[\d\.]+)\] \"\[(?<http_x_forwarded_for>.*?)\]\" \"\[(?<request>\w+ [^\\"]*)\]\" \"\[(?<request_time>[\d\.]+)\]\"\[(?<status>\d+)\] \[(?<host_request_uri>.*?)\]'
      - '(?<remote_addr>[\d\.]+) - - \[(?<time_local>\S+ \S+)\] \"(?<request>\w+ [^\\"]*)\" (?<status>[\d\.]+) (?<body_bytes_sent>\d+) \"(?<http_referer>.*?)\" \"(?<http_user_agent>.*?)\" \"(?<http_x_forwarded_for>.*?)\"'
    # false: drop logs that fail to parse
    IgnoreParseFailure: true
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: nginx-access-logs

nginx_err.yaml

enable: true
inputs:
  # Collect file logs
  - Type: file_log
    LogPath: /var/log/nginx/
    FilePattern: "error.log"
    MaxDepth: 0
    # Whether the files live inside containers
    ContainerFile: true
processors:
  # Merge multi-line entries: a new log line starts with an hh:mm:ss timestamp
  - Type: processor_split_log_regex
    SplitRegex: .*\d+:\d+:\d+.*
    SplitKey: content
    PreserveOthers: true
  - Type: processor_grok
    SourceKey: content
    KeepSource: false
    Match:
      - '(?<datetime>\d+/\d+/\d+ \d+:\d+:\d+) \[(?<log_level>\w+)\] (?<pid>\d+)#\d+: \*(?<number>\d+) (?<error_message>[\w\W]*?), client: (?<clientip>[\d\.]+), server: (?<server>.*?), request: \"(?<request>.*?)\", host: \"(?<host>.*?)\"'
    IgnoreParseFailure: true
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: nginx-error-logs

nginx_logs.yaml

enable: true
inputs:
  # Collect file logs
  - Type: file_log
    LogPath: /var/log/nginx/access/
    FilePattern: "*.log"
    MaxDepth: 0
    # Whether the files live inside containers
    ContainerFile: true
processors:
  - Type: processor_json
    SourceKey: content
    # Whether to keep the original field
    KeepSource: false
    # JSON expansion depth
    ExpandDepth: 3
    # Connector used when expanding keys
    ExpandConnector: "_"
    # Whether to prefix expanded keys with the source key
    #UseSourceKeyAsPrefix: true
  - Type: processor_grok
    SourceKey: content
    KeepSource: false
    # Array of Grok expressions to try
    Match:
      - '\[(?<time_local>.*?)\] \[(?<remote_addr>[\d\.]+)\] \"\[(?<http_x_forwarded_for>.*?)\]\" \"\[(?<request>\w+ [^\\"]*)\]\" \"\[(?<request_time>[\d\.]+)\]\"\[(?<status>\d+)\] \[(?<host_request_uri>.*?)\]'
      - '(?<remote_addr>[\d\.]+) - - \[(?<time_local>\S+ \S+)\] \"(?<request>\w+ [^\\"]*)\" (?<status>[\d\.]+) (?<body_bytes_sent>\d+) \"(?<http_referer>.*?)\" \"(?<http_user_agent>.*?)\" \"(?<http_x_forwarded_for>.*?)\"'
    # false: drop logs that fail to parse
    IgnoreParseFailure: true
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: nginx-access-logs

tomcat_access.yaml

enable: true
# Input config
inputs:
  # Collect file logs
  - Type: file_log
    LogPath: /usr/local/tomcat/logs/cronlog/
    FilePattern: "access*.log"
    MaxDepth: 0
    # Whether the files live inside containers
    ContainerFile: true
processors:
  - Type: processor_json
    SourceKey: content
    # Whether to keep the original field
    KeepSource: false
    # JSON expansion depth
    ExpandDepth: 3
    # Connector used when expanding keys
    ExpandConnector: "_"
    # Whether to prefix expanded keys with the source key
    #UseSourceKeyAsPrefix: true
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: tomcat-access-logs

tomcat_catalina.yaml

enable: true
# Input config
inputs:
  # Collect file logs
  - Type: file_log
    LogPath: /usr/local/tomcat/logs/
    FilePattern: "catalina*.log"
    MaxDepth: 0
    # Whether the files live inside containers
    ContainerFile: true
processors:
  # Merge multi-line entries: a new log line starts with an hh:mm:ss timestamp
  - Type: processor_split_log_regex
    SplitRegex: .*\d+:\d+:\d+.*
    SplitKey: content
    PreserveOthers: true
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: tomcat-app-logs

tomcat_cronlog.yaml

enable: true
# Input config
inputs:
  # Collect file logs
  - Type: file_log
    LogPath: /usr/local/tomcat/logs/cronlog/
    FilePattern: "*.log"
    MaxDepth: 0
    # Whether the files live inside containers
    ContainerFile: true
processors:
  # Merge multi-line entries: a new log line starts with an hh:mm:ss timestamp
  - Type: processor_split_log_regex
    SplitRegex: .*\d+:\d+:\d+.*
    SplitKey: content
    PreserveOthers: true
  # Mask secrets that follow Chinese markers such as 密钥: (key:) / 加密后: (encrypted:)
  - Type: processor_desensitize
    SourceKey: content
    Method: "const"
    Match: "regex"
    ReplaceString: "********"
    RegexBegin: "(密钥:|密钥为|加密后:)"
    RegexContent: "[^'|^\"]*"
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: tomcat-cronlog-logs

container_logs.yaml

enable: true
inputs:
  # Collect file logs
  - Type: file_log
    LogPath: /mcloud/
    FilePattern: "*.log"
    # Directory depth to scan
    MaxDepth: 5
    # Whether the files live inside containers
    ContainerFile: true
processors:
  # Merge multi-line entries: a new log line starts with an hh:mm:ss timestamp
  - Type: processor_split_log_regex
    SplitRegex: .*\d+:\d+:\d+.*
    SplitKey: content
    PreserveOthers: true
  # Mask passwords and access keys before the logs leave the node
  - Type: processor_desensitize
    SourceKey: content
    Method: "const"
    Match: "regex"
    ReplaceString: "********"
    RegexBegin: "PASSWORD' => '"
    RegexContent: "[^'|^\"]*"
  - Type: processor_desensitize
    SourceKey: content
    Method: "const"
    Match: "regex"
    ReplaceString: "********"
    RegexBegin: "(password|PASSWORD). => "
    RegexContent: "[^'|^\"]*"
  - Type: processor_desensitize
    SourceKey: content
    Method: "const"
    Match: "regex"
    ReplaceString: "********"
    RegexBegin: "'password':'|\"password\":\""
    RegexContent: "[^'|^\"]*"
  - Type: processor_desensitize
    SourceKey: content
    Method: "const"
    Match: "regex"
    ReplaceString: "********"
    RegexBegin: "AccessKeyId: ['|\"]|AccessKeySecret: ['|\"]"
    RegexContent: "[^'|^\"]*"
  - Type: processor_json
    SourceKey: content
    # Whether to keep the original field
    KeepSource: false
    # JSON expansion depth
    ExpandDepth: 3
    # Connector used when expanding keys
    ExpandConnector: "_"
    # Whether to prefix expanded keys with the source key
    #UseSourceKeyAsPrefix: true
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: prod-csp-logs

container_stdout.yaml

enable: true
inputs:
  # Collect container stdout/stderr via the container runtime
  - Type: service_docker_stdout
    Stderr: true
    Stdout: true
    BeginLineRegex: ".*\\d+:\\d+:\\d+.*"
# Send collected logs to Kafka
flushers:
  - Type: flusher_kafka_v2
    Brokers:
      - 192.168.6.242:9092
    Topic: container-stdout-logs

All of the collection configs above push their logs to the Kafka broker at 192.168.6.242:9092.

3. Deploying Kafka

Install the Java environment

Official download page: Java Archive Downloads - Java SE 8u211 and later

The example below uses jdk-8u391-linux-x64.tar.gz; download it and upload it to the server.

mkdir -p /usr/local/java
tar xf jdk-8u391-linux-x64.tar.gz -C /usr/local/java/
# Add the following to /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.8.0_391
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
# Apply immediately
source /etc/profile

Verify the installation by printing the Java version:

java -version

Download Kafka

This article uses kafka_2.12-3.5.1.tgz; download it with wget/curl or manually. Download page: http://kafka.apache.org/downloads

curl -LO https://downloads.apache.org/kafka/3.5.1/kafka_2.12-3.5.1.tgz

Installation and configuration

1. Download the package to the target directory, then extract Kafka (which bundles ZooKeeper):

cd /opt/
tar xf kafka_2.12-3.5.1.tgz

2. Edit the Kafka configuration file (make sure the log.dirs directory exists):

vim config/server.properties
# Change the following two lines
# Kafka listen address
listeners=PLAINTEXT://192.168.6.242:9092
# Directory where Kafka stores its log segments
log.dirs=/elk/kafka-logs

3. Edit the ZooKeeper configuration file (make sure the dataDir directory exists):

vim config/zookeeper.properties
dataDir=/elk/zookeeper
clientPort=2181
maxClientCnxns=0
admin.enableServer=false
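Both configs above assume their data directories already exist; create them up front:

mkdir -p /elk/kafka-logs /elk/zookeeper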

4. Start ZooKeeper:

nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties &

5. Start Kafka:

 nohup bin/kafka-server-start.sh config/server.properties &
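With both processes up, the broker can be sanity-checked from the Kafka directory. A minimal sketch; with Kafka's default auto.create.topics.enable=true the topics are created on first write, so pre-creating them (with an arbitrary partition count, here 3) is optional:

# List the topics the broker knows about
bin/kafka-topics.sh --bootstrap-server 192.168.6.242:9092 --list

# Optionally pre-create one of the topics used by the collectors above
bin/kafka-topics.sh --bootstrap-server 192.168.6.242:9092 \
  --create --topic nginx-access-logs --partitions 3 --replication-factor 1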

Apply the ilogtail manifest

kubectl apply -f ilogtail-daemonset.yaml

  • Check that the pods started, then verify the logs on the Kafka side:

kubectl get pod -n ilogtail

# Inspect the log segments generated under /elk/kafka-logs
cd /elk/kafka-logs
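Messages can also be consumed directly to confirm end-to-end delivery (run from the Kafka directory):

# Read a few messages from one of the topics, then exit
bin/kafka-console-consumer.sh --bootstrap-server 192.168.6.242:9092 \
  --topic nginx-access-logs --from-beginning --max-messages 5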

4. Deploying Logstash

vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

yum install -y logstash
cd /etc/logstash/conf.d

Place the Logstash pipeline files below in the /etc/logstash/conf.d directory.

logstash-nginxaccess.conf

input {
  kafka {
    bootstrap_servers => ["192.168.6.242:9092"]
    client_id => "test5"
    group_id => "nginxaccesslogs"
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["nginx-access-logs"]
    type => "nginxaccess"
  }
}
filter {
  # Drop health-check and status-probe noise
  if [message] =~ /\/health-check/ {
    drop {}
  }
  if [message] =~ /\/check-status/ {
    drop {}
  }
  if [message] =~ /\/nginx_status/ {
    drop {}
  }
  if [message] =~ /\/checkstatus/ {
    drop {}
  }
  json {
    # Parse the JSON document held in the message field
    source => "message"
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.6.242:9200","http://192.168.6.170:9200","http://192.168.7.167:9200"]
    index => "nginx-access-logs"
  }
}

logstash-nginxerr.conf

input {
  kafka {
    bootstrap_servers => ["192.168.6.242:9092"]
    client_id => "test6"
    group_id => "nginxerrorlogs"
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["nginx-error-logs"]
    type => "nginxerror"
  }
}
filter {
  # Drop status and health-check noise
  if [message] =~ /\/status/ {
    drop {}
  }
  if [message] =~ /\/nginx_status/ {
    drop {}
  }
  if [message] =~ /\/check-status/ {
    drop {}
  }
  if [message] =~ /check-health/ {
    drop {}
  }
  json {
    # Parse the JSON document held in the message field
    source => "message"
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.6.242:9200","http://192.168.6.170:9200","http://192.168.7.167:9200"]
    index => "nginx-error-logs"
  }
}

logstash-tomcataccess.conf

input {
  kafka {
    bootstrap_servers => ["192.168.6.242:9092"]
    client_id => "test7"
    group_id => "tomcataccesslogs"
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["tomcat-access-logs"]
    type => "tomcat"
  }
}
filter {
  # Drop health-check noise
  if [message] =~ /\/Healthcheck/ {
    drop {}
  }
  if [message] =~ /\/healthcheck/ {
    drop {}
  }
  if [message] =~ /\/healthCheck/ {
    drop {}
  }
  if [message] =~ /check-health/ {
    drop {}
  }
  json {
    # Parse the JSON document held in the message field
    source => "message"
    remove_field => ["message", "fields"]
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.6.242:9200","http://192.168.6.170:9200","http://192.168.7.167:9200"]
    index => "tomcat-access-logs"
  }
}

logstash-tomcatcronlog.conf

input {
  kafka {
    bootstrap_servers => ["192.168.6.242:9092"]
    client_id => "test8"
    group_id => "tomcatcronlogs"
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["tomcat-cronlog-logs"]
    type => "tomcat"
  }
}
filter {
  # Drop health-check noise
  if [message] =~ /\/Healthcheck/ {
    drop {}
  }
  if [message] =~ /\/healthcheck/ {
    drop {}
  }
  if [message] =~ /\/healthCheck/ {
    drop {}
  }
  if [message] =~ /check-health/ {
    drop {}
  }
  json {
    # Parse the JSON document held in the message field
    source => "message"
    remove_field => ["message", "fields"]
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.6.242:9200","http://192.168.6.170:9200","http://192.168.7.167:9200"]
    index => "tomcat-cronlog-logs"
  }
}
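Before starting the service, each pipeline file can be syntax-checked with the Logstash binary installed by the RPM (a sketch; repeat per file):

/usr/share/logstash/bin/logstash --config.test_and_exit \
  -f /etc/logstash/conf.d/logstash-nginxaccess.conf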

Start Logstash

systemctl start logstash

Logstash logs are written to /var/log/logstash/.
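Once Logstash is consuming and the ES cluster from the next section is up, the target indices should appear. A quick check against one of the ES nodes:

curl 'http://192.168.6.242:9200/_cat/indices/nginx-*,tomcat-*?v'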

5. Deploying the ES cluster

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

tee /etc/yum.repos.d/elasticsearch.repo <<-'EOF'
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

Install from the YUM repository:

yum install -y elasticsearch-7.17.6

Edit the configuration file (/etc/elasticsearch/elasticsearch.yml):

# Cluster name; must be identical on all three nodes
cluster.name: elasticsearch
# Node name; must be different on every node
node.name: master
# Whether this node is eligible to be elected master
node.master: true
# Whether this node stores index data
node.data: true
# Data directory
path.data: /elk/elasticsearch
# Log directory
path.logs: /var/log/elasticsearch
# Bind address, also used for inter-node communication
network.host: 192.168.6.242
# HTTP port
http.port: 9200
# Port for inter-node transport
transport.tcp.port: 9300
# Whether to allow cross-origin requests
http.cors.enabled: true
# With CORS enabled, the default * accepts every origin
http.cors.allow-origin: "*"
# Initial list of master-eligible nodes in the cluster
discovery.zen.ping.unicast.hosts: ["192.168.6.242:9300","192.168.6.170:9300","192.168.7.167:9300"]
# Master-eligible nodes used to bootstrap the cluster (older versions ignore this setting; ES makes the first node to join the cluster the master)
cluster.initial_master_nodes: ["master"]
# To avoid split-brain, require at least half the master-eligible nodes plus one
discovery.zen.minimum_master_nodes: 2

Copy /etc/elasticsearch/elasticsearch.yml to the other two nodes and adjust the node.name, network.host, and discovery.zen.ping.unicast.hosts parameters; path.data and path.logs can be customized to other data and log locations.
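A minimal sketch of rolling the file out to the other nodes (IPs taken from the cluster list above; each node still needs its own node.name and network.host edited afterwards):

for host in 192.168.6.170 192.168.7.167; do
  scp /etc/elasticsearch/elasticsearch.yml root@${host}:/etc/elasticsearch/
done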

Install the IK analyzer plugin

Elastic does not ship an official IK analyzer plugin, so download the plugin and install it manually (use the build matching ES 7.17.6).

Link: https://pan.baidu.com/s/1_RGAzctJk17yJjHOb4OEJw?pwd=to96
Extraction code: to96

/usr/share/elasticsearch/bin/elasticsearch-plugin install file:///root/elasticsearch-analysis-ik-7.17.6.zip

Enable Elasticsearch at boot and start it immediately:

systemctl enable elasticsearch.service --now
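After all three nodes are started, the cluster state and the IK plugin can be verified with two quick calls (the sample sentence passed to the analyzer is arbitrary):

# Cluster health: expect status green and number_of_nodes 3
curl 'http://192.168.6.242:9200/_cluster/health?pretty'

# Exercise the IK analyzer installed above
curl -X POST 'http://192.168.6.242:9200/_analyze?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'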

6. Deploying Kibana

Download page: Download Kibana Free | Get Started Now | Elastic

# Extract the Kibana package; -C can be used to pick a custom path
tar xf kibana-7.17.6-linux-x86_64.tar.gz
cd kibana-7.17.6-linux-x86_64/config
vim kibana.yml

# Kibana listen address
server.host: 0.0.0.0
# Elasticsearch cluster list
elasticsearch.hosts: ["http://***:9200","http://***:9200","http://***:9200"]

# Start in the background
nohup ./bin/kibana --allow-root &
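Kibana listens on port 5601 by default; a quick liveness check before opening the UI and creating index patterns for the nginx-* and tomcat-* indices:

curl -I http://127.0.0.1:5601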