Deploying a one-master, one-slave ELK stack with Docker Compose on a single Linux host
The following files need to be prepared.
The files above live under /www; the exact layout is shown below (the Filebeat config lives separately at /opt/filebeat/conf/filebeat.yml):

[root@elk107 www]# tree
.
├── docker-elk.yml
├── docker-filebeat.yml
├── elasticsearch
│   ├── master
│   │   └── conf
│   │       └── es-master.yml
│   └── slave1
│       └── conf
│           └── es-slave1.yml
├── kibana
│   └── conf
│       └── kibana.yml
└── logstash
    └── conf
        └── logstash-filebeat.conf
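If you are recreating this layout from scratch, the directories (including the data and logs directories that docker-elk.yml mounts) can be prepared up front. A minimal sketch, assuming the paths used throughout this post:

# Create the config/data/log directory skeleton for all services
mkdir -p /www/elasticsearch/master/{conf,data,logs}
mkdir -p /www/elasticsearch/slave1/{conf,data,logs}
mkdir -p /www/kibana/conf /www/logstash/conf
mkdir -p /opt/filebeat/{conf,data,logs}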
===========================================
docker-elk.yml
version: "3" services: es-master: container_name: es-master hostname: es-master image: elasticsearch:7.1.1 restart: always user: root ports: - 9200:9200 - 9300:9300 volumes: - /www/elasticsearch/master/conf/es-master.yml:/usr/share/elasticsearch/config/elasticsearch.yml - /www/elasticsearch/master/data:/usr/share/elasticsearch/data - /www/elasticsearch/master/logs:/usr/share/elasticsearch/logs environment: - "TAKE_FILE_OWNERSHIP=true" - "ES_JAVA_OPTS=-Xms512m -Xmx512m" - "TZ=Asia/Shanghai" es-slave1: container_name: es-slave1 image: elasticsearch:7.1.1 restart: always ports: - 9201:9200 - 9301:9300 volumes: - /www/elasticsearch/slave1/conf/es-slave1.yml:/usr/share/elasticsearch/config/elasticsearch.yml - /www/elasticsearch/slave1/data:/usr/share/elasticsearch/data - /www/elasticsearch/slave1/logs:/usr/share/elasticsearch/logs environment: - "TAKE_FILE_OWNERSHIP=true" - "ES_JAVA_OPTS=-Xms512m -Xmx512m" - "TZ=Asia/Shanghai" kibana: container_name: kibana hostname: kibana image: kibana:7.1.1 restart: always ports: - 5601:5601 volumes: - /www/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml environment: - elasticsearch.hosts=http://es-master:9200 - "TZ=Asia/Shanghai" depends_on: - es-master - es-slave1 logstash: container_name: logstash hostname: logstash image: logstash:7.1.1 command: logstash -f ./conf/logstash-filebeat.conf restart: always volumes: # 映射到容器中 - /www/logstash/conf/logstash-filebeat.conf:/usr/share/logstash/conf/logstash-filebeat.conf environment: - elasticsearch.hosts=http://es-master:9200 # 解决logstash监控连接报错 - xpack.monitoring.elasticsearch.hosts=http://es-master:9200 - "TZ=Asia/Shanghai" ports: - 5044:5044 depends_on: - es-master - es-slave1
docker-filebeat.yml
version: "3" services: filebeat: # 容器名称 container_name: filebeat # 主机名称 hostname: filebeat # 镜像 image: docker.elastic.co/beats/filebeat:7.0.1 # 重启机制 restart: always # 启动用户 user: root # 持久化挂载 volumes: # 映射到容器中[作为数据源] - /www/mua/runtime/log:/www/mua/runtime/log - /www/wx/runtime/log:/www/wx/runtime/log - /www/supplyChain/runtime/log:/www/supplyChain/runtime/log # 方便查看数据及日志(可不映射) - /opt/filebeat/logs:/usr/share/filebeat/logs - /opt/filebeat/data:/usr/share/filebeat/data # 映射配置文件到容器中 - /opt/filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml # 使用主机网络模式 network_mode: host
es-master.yml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-master
# Whether this node may be elected master
node.master: true
# Whether this node stores data (enabled by default)
node.data: true
# Network binding
network.host: 0.0.0.0
# HTTP port for external access
http.port: 9200
# TCP port for inter-node transport
transport.port: 9300
# Cluster discovery
discovery.seed_hosts:
  - es-master
  - es-slave1
# Manually list the nodes (by name or IP) that may become master; used in the first election
cluster.initial_master_nodes:
  - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"
es-slave1.yml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-slave1
# Whether this node may be elected master
node.master: true
# Whether this node stores data (enabled by default)
node.data: true
# Network binding
network.host: 0.0.0.0
# HTTP port inside the container (docker-elk.yml maps host port 9201 to container port 9200)
http.port: 9200
# TCP port for inter-node transport (defaults to 9300 inside the container)
#transport.port: 9301
# Cluster discovery
discovery.seed_hosts:
  - es-master
  - es-slave1
# Manually list the nodes (by name or IP) that may become master; used in the first election
cluster.initial_master_nodes:
  - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"
filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /www/mua/runtime/log/*[0-9][0-9].log
      - /www/mua/runtime/log/*_cli.log
    multiline:
      pattern: '[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}'
      negate: true
      match: after
    fields:
      log_topics: muats
  - type: log
    enabled: true
    paths:
      - /www/mua/runtime/log/*_info.log
    multiline:
      pattern: '[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}'
      negate: true
      match: after
    fields:
      log_topics: muats-info
  - type: log
    enabled: true
    paths:
      - /www/mua/runtime/log/*_error.log
    multiline:
      pattern: '[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}'
      negate: true
      match: after
    fields:
      log_topics: muats-error

output.logstash:
  hosts: ["192.168.136.107:5044"]
kibana.yml
# Server port
server.port: 5601
# Bind address
server.host: "0.0.0.0"
# Elasticsearch endpoint
elasticsearch.hosts: ["http://es-master:9200"]
# Use the Chinese UI locale
i18n.locale: "zh-CN"
logstash-filebeat.conf
input {
  beats {
    port => 5044
  }
}
# Analysis / filter plugins; multiple are allowed
filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
  }
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss.SSS"]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => "http://es-master:9200"
    index => "%{[fields][log_topics]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
================================================================
cd /www
Start the ELK stack:
docker-compose -f docker-elk.yml up -d   # start the services in the background
To watch the logs in real time, run it in the foreground instead:
docker-compose -f docker-elk.yml up
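One caveat not shown above: Elasticsearch in Docker commonly refuses to start if the host kernel's vm.max_map_count is too low. If the es-master or es-slave1 containers exit right after startup, raising it on the host usually helps:

sysctl -w vm.max_map_count=262144   # temporary; add to /etc/sysctl.conf to persist across reboots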
Open Kibana in a browser:
http://192.168.136.107:5601/
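You can also verify that both Elasticsearch nodes are up and have formed a cluster. A quick check, assuming the host IP used elsewhere in this post (192.168.136.107):

curl 'http://192.168.136.107:9200/_cluster/health?pretty'   # status should be green
curl 'http://192.168.136.107:9200/_cat/nodes?v'             # should list es-master and es-slave1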
=======================================================
Start Filebeat to watch the specified file paths.
The monitored paths are:
- /www/mua/runtime/log/*[0-9][0-9].log
- /www/mua/runtime/log/*_cli.log
From the /www directory, run the start command:
docker-compose -f docker-filebeat.yml up -d
or, to run it in the foreground:
docker-compose -f docker-filebeat.yml up
Once it starts successfully, Filebeat watches the files under the specified paths; whenever a file's content changes, the change is shipped to Logstash for filtering and processing, and the processed events are stored in ES. We can then analyze the data from ES with Kibana and present it as visual charts.
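To confirm the shipper itself is healthy, you can tail the Filebeat container's own output (the container name comes from docker-filebeat.yml):

docker logs -f filebeat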
=========================================================
Modify one of the monitored files to test whether the pipeline works:
cd /www/mua/runtime/log
vim test_cli.log
Type 66666666666666666, then save and exit.
(Screenshot: the state of ELK and Filebeat before saving.)
(Screenshot: after saving.)
If a monitored file goes too long without updates, the pipeline will log failures and retry.
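For repeated tests, a line can also be appended without opening an editor; a minimal alternative, assuming the same test file:

echo '66666666666666666' >> /www/mua/runtime/log/test_cli.log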
Then check in Kibana. Visit:
http://192.168.136.107:5601/
Click Management → Index Management; you can see that a new index has been created automatically.
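The same check can be done from the shell (again assuming the host IP from earlier):

curl 'http://192.168.136.107:9200/_cat/indices?v'   # the new log_topics index should appear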
Click Discover to see the details of the changes to the monitored file.
Give it another try to watch new entries arrive.
====================
Visualizing Spring Boot logs in ELK
Spring Boot + Filebeat + ELK
The Spring Boot project writes logs; Filebeat watches them and, when they change, ships the changes to Logstash for filtering and processing; the result is stored in ES, and Kibana reads the log data from ES and displays it as charts.
Create a new Spring Boot project and use a loop to generate test data.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.2</version>
        <relativePath /> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>demo-elk</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo-elk</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.8.1</version>
        </dependency>
        <dependency>
            <groupId>joda-time</groupId>
            <artifactId>joda-time</artifactId>
            <version>2.10.5</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.RandomUtils;
import org.joda.time.DateTime;

@Slf4j
@SpringBootApplication
public class DemoElkApplication {

    public static final String[] VISIT = new String[] { "浏览页面", "评论商品", "加入收藏", "加入购物车",
            "提交订单", "使用优惠券", "领取优惠券", "搜索", "查看订单" };

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            // Sleep between 200 ms and 5 s between records
            Long sleep = RandomUtils.nextLong(200, 1000 * 5);
            Thread.sleep(sleep);
            // Random user id
            Long maxUserId = 9999L;
            Long userId = RandomUtils.nextLong(1, maxUserId);
            // Random action from the VISIT list
            String visit = VISIT[RandomUtils.nextInt(0, VISIT.length)];
            // Random timestamp earlier today
            DateTime now = new DateTime();
            int maxHour = now.getHourOfDay();
            int maxMillis = now.getMinuteOfHour();
            int maxSeconds = now.getSecondOfMinute();
            String date = now.plusHours(-(RandomUtils.nextInt(0, maxHour)))
                    .plusMinutes(-(RandomUtils.nextInt(0, maxMillis)))
                    .plusSeconds(-(RandomUtils.nextInt(0, maxSeconds)))
                    .toString("yyyy-MM-dd HH:mm:ss");
            // Pipe-delimited record: DAU|userId|action|timestamp
            String result = "DAU|" + userId + "|" + visit + "|" + date;
            // Log at ERROR level so the record is written to the log file
            log.error(result);
        }
    }
}
Package it as a jar and upload it to the Linux host.
Install the JDK:
yum install java-1.8.0-openjdk*
vi /etc/profile
#set java environment
JAVA_HOME=/usr/lib/jvm/java
JRE_HOME=$JAVA_HOME/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
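Reload the profile so the new variables take effect in the current shell:

source /etc/profile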
Check that the installation succeeded:
java -version
Run the jar, redirecting its output into a file under the directory Filebeat watches:
cd /www/mua/runtime/log
java -Xms256m -Xmx512m -jar demo-elk-0.0.1-SNAPSHOT.jar > app11.log 2>&1 &
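The app appends a record every 0.2 to 5 seconds; you can confirm the log is growing (and that its name, app11.log, matches Filebeat's *[0-9][0-9].log pattern) with:

tail -f /www/mua/runtime/log/app11.log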
Next, modify Logstash's config file, i.e., change its filtering rules, for example to split the string that Filebeat sends over.
After stopping the ELK stack, change the config to the following. It splits the original message on the | character into an array, then rebuilds positions 1, 2, and 3 of the array as the fields userId, visit, and date.
For example, if the original message is
message:20:59:22.482 [main] ERROR com.example.demo.DemoElkApplication - DAU|6944|查看订单|2022-01-09 01:01:10
then after splitting it becomes the array [DAU, 6944, 查看订单, 2022-01-09 01:01:10].
vim /www/logstash/conf/logstash-filebeat.conf
input {
  beats {
    port => 5044
  }
}
# Analysis / filter plugins; multiple are allowed
filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
  }
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss.SSS"]
    target => "@timestamp"
  }
  # Split message on "|" into an array
  mutate {
    split => {"message" => "|"}
  }
  # Promote array positions 1-3 to their own fields
  mutate {
    add_field => {
      "userId" => "%{[message][1]}"
      "visit" => "%{[message][2]}"
      "date" => "%{[message][3]}"
    }
  }
  # Set the field types
  mutate {
    convert => {
      "userId" => "integer"
      "visit" => "string"
      "date" => "string"
    }
  }
}
output {
  elasticsearch {
    hosts => "http://es-master:9200"
    index => "%{[fields][log_topics]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
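Before restarting, the pipeline syntax can be checked without actually starting Logstash. One way, a sketch assuming the same logstash:7.1.1 image and config path as in docker-elk.yml:

# --config.test_and_exit parses the pipeline, reports errors, and exits
docker run --rm -v /www/logstash/conf/logstash-filebeat.conf:/tmp/logstash-filebeat.conf logstash:7.1.1 logstash --config.test_and_exit -f /tmp/logstash-filebeat.conf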
Start the ELK stack again:
docker-compose -f docker-elk.yml up
Restart Filebeat: press Ctrl+C to stop it, then run
docker-compose -f docker-filebeat.yml up
Compare the output of the two runs: what's different? The message that used to be one monolithic field has been split, and the user ID and action name have been extracted into separate fields.
With more fields available, you can use Kibana to build charts from the data behind them, for example a histogram of visitor counts over time.
Create the histogram.
For example, a pie chart can show how many times each action was performed in different time periods; this maps to the filtered visit field.
Create a saved search in Discover.
Assemble the histogram, pie chart, and saved search above into a dashboard.