This deployment uses the official Elasticsearch images and Docker Compose to create a single-node ELK stack for learning how to operate ELK. If the daily log volume inside a Kubernetes cluster exceeds 20 GB, it is recommended to deploy ELK outside the k8s cluster so that a distributed cluster architecture can be used; in that scenario a stateful deployment with dynamic storage for persistence is used, and the storage class must be created before that YAML is applied. This document only uses the commonly used Elasticsearch + Logstash + Kibana components.
/docker/
├── elk
│   ├── docker-compose.yml
│   ├── elasticsearch
│   ├── kibana
│   │   └── config
│   │       └── kibana.yml
│   └── logstash
│       ├── config
│       │   └── logstash.yml
│       └── pipeline
│           └── logstash.conf
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.17.2
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    restart: always
    environment:
      # Cluster name
      cluster.name: elasticsearch
      # Run as a single node
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    volumes:
      - /docker/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - /docker/elk/elasticsearch/data:/usr/share/elasticsearch/data
      - /docker/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
    networks:
      - elk
  logstash:
    image: logstash:7.17.2
    container_name: logstash
    restart: always
    ports:
      - "4560:4560"
    volumes:
      - /docker/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - /docker/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
    depends_on:
      - elasticsearch
    links:
      # The elasticsearch service is also reachable under the alias "es"
      - elasticsearch:es
    networks:
      - elk
  kibana:
    image: kibana:7.17.2
    container_name: kibana
    restart: always
    ports:
      - "5601:5601"
    depends_on:
      # Start Kibana only after Elasticsearch
      - elasticsearch
    environment:
      # Set the UI language to Chinese
      I18N_LOCALE: zh-CN
      # Public access URL
      # SERVER_PUBLICBASEURL: https://kibana.cloud.com
    volumes:
      - /docker/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    links:
      # The elasticsearch service is also reachable under the alias "es"
      - elasticsearch:es
    networks:
      - elk
networks:
  elk:
    name: elk
    driver: bridge
The services section declares three services: elasticsearch, logstash, and kibana.
Create the kibana.yml file
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
Create the main Logstash configuration file logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
Create the pipeline configuration file logstash.conf
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{[spring.application.name]}-%{+YYYY.MM.dd}"
  }
}
Start the stack with docker-compose
docker-compose up -d elasticsearch kibana logstash
Visit port 9200 to check whether Elasticsearch started successfully.
Visit port 5601 to check whether the Kibana container is up.
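The same checks can be done from the command line; this is a minimal sketch assuming the stack runs on the local host with the port mappings from the compose file, and demo-service in the last command is only an illustrative application name.

# Elasticsearch: should return a JSON document with the cluster name, version and tagline
curl http://localhost:9200

# Kibana: any HTTP response (typically a redirect to the login or home page) means the container is listening
curl -I http://localhost:5601

# Optional: push one JSON line into the Logstash TCP input defined in logstash.conf.
# Depending on the netcat variant you may need -q 1 (GNU) or -N (OpenBSD) so it exits after EOF.
echo '{"message":"pipeline test","spring.application.name":"demo-service"}' | nc localhost 4560

Next, add the logstash-logback-encoder dependency to the Spring Boot application's pom.xml: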
<!-- logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.2</version>
</dependency>
Add the Logstash appender configuration to the logback.xml file:
<!-- logstash -->
<springProperty scope="context" name="appName" source="spring.application.name"/>
<!-- Appender that ships log events to Logstash -->
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Reachable Logstash collection address; logstash.host and logstash.port must be
         defined as properties, or replaced with a literal host:port such as the Docker
         host address and port 4560 -->
    <destination>${logstash.host}:${logstash.port}</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"spring.application.name":"${appName}"}</customFields>
    </encoder>
</appender>
<root level="info">
<appender-ref ref="logstash"/>
</root>
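With the appender attached to the root logger, every ordinary SLF4J logging call is serialized by LogstashEncoder as a JSON line and shipped to the Logstash TCP input, and the customFields entry above supplies the spring.application.name field that the Logstash output uses to build the index name. The following is a minimal sketch of the application side; the class name and log message are illustrative, not taken from the original setup.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ElkDemoApplication {

    private static final Logger log = LoggerFactory.getLogger(ElkDemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(ElkDemoApplication.class, args);
        // Sent through the "logstash" appender to the TCP input on port 4560,
        // then written by Logstash to the <spring.application.name>-<yyyy.MM.dd> index.
        log.info("ELK pipeline smoke test");
    }
}

If spring.application.name is set to demo-service, the event ends up in a daily index named demo-service-yyyy.MM.dd, which the wildcard pattern below will match.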
In Kibana, a wildcard index pattern, *-*, is used here so that all of these indices can be viewed together.
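Before creating the index pattern, the existing indices can be listed to confirm that each application's daily index is present; this assumes Elasticsearch is reachable on localhost:9200 as configured above.

curl "http://localhost:9200/_cat/indices?v"

Each application that has logged through Logstash should show one index per day, named after its spring.application.name.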