ELK 7.2.0 Deployment: Building a Distributed Logging Platform, with a Custom log4j2 Log Level Shipping Logs to Logstash

I. Introduction

1. What Is ELK

ELK is short for Elasticsearch + Logstash + Kibana:

  • Elasticsearch is a distributed, Lucene-based full-text search engine that exposes a RESTful API for reading and writing data

  • Logstash is a tool for collecting, processing, and forwarding events and log messages

  • Kibana is an open-source data visualization plugin for Elasticsearch. It provides a friendly web interface for browsing the data stored in Elasticsearch, along with analysis tools such as bar charts, line and scatter plots, pie charts, and maps

In short: Elasticsearch stores the data; Logstash collects logs, formats them, and writes them into Elasticsearch; Kibana provides visual access to the data held in Elasticsearch.

2. ELK Workflow

In this article's setup, the application ships JSON-formatted log events over TCP (via a log4j2 Socket appender) to Logstash, which writes them into the Elasticsearch cluster. Kibana then reads the logs from Elasticsearch and presents them as tables and charts in its web UI.

II. Preparation

1. Servers and Software Environment

  • Servers

Three Ubuntu 18.04 servers are used in total:

    Hostname   IP             Role
    es1        192.168.1.69   Elasticsearch master node
    es2        192.168.1.70   Elasticsearch data node
    elk        192.168.1.71   Logstash + Kibana

To keep the footprint small, only two Elasticsearch nodes are deployed here, and Logstash and Kibana share a single machine.
For a production deployment, adjust the topology to your own needs.

  • Software

    Component       Version
    Linux Server    Ubuntu 18.04
    Elasticsearch   7.2.0
    Logstash        7.2.0
    Kibana          7.2.0
    JDK             11.0.2

2. ELK Environment Preparation

Elasticsearch, Logstash, and Kibana cannot be run as root, but Linux limits how many files and processes a non-root account may open concurrently. Every machine that will run an ELK component therefore needs the following adjustments:

  • Raise the open-file and process limits

    # Edit the limits file
    sudo vim /etc/security/limits.conf
    # Append the following lines
    * soft nofile 65536
    * hard nofile 65536
    * soft nproc 4096
    * hard nproc 4096

  • Raise the virtual memory (mmap count) limit

    sudo vim /etc/sysctl.conf
    # Append the following line
    vm.max_map_count=655360

These changes take effect after a reboot. (The exact limits differ across distributions, hardware, and software versions; tune them precisely based on the errors Elasticsearch reports at startup.)

sudo reboot now
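
Alternatively, the sysctl change can be applied without a full reboot, and the new limits can be verified after logging back in, using the standard commands:

    # Reload /etc/sysctl.conf immediately
    sudo sysctl -p
    # After re-login, verify the new limits
    ulimit -n    # max open files; should print 65536
    ulimit -u    # max user processes; should print 4096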
  • Download the ELK packages and extract them

https://www.elastic.co/cn/downloads

    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
    wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz
    wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
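
Each archive is then unpacked on its target machine with tar (the Elasticsearch step is repeated in the next section):

    tar -zxvf logstash-7.2.0.tar.gz
    tar -zxvf kibana-7.2.0-linux-x86_64.tar.gz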

III. Elasticsearch Installation and Deployment

Two Elasticsearch nodes are deployed in total; unless a specific machine is named, every step below must be performed on each Elasticsearch machine.

1. Preparation

  • Extract the archive

    tar -zxvf elasticsearch-7.2.0-linux-x86_64.tar.gz

2. Elasticsearch Configuration

  • Edit the configuration

    vim config/elasticsearch.yml

  • Master node configuration (192.168.1.69)

    cluster.name: es
    node.name: es1
    path.data: /home/rock/elasticsearch-7.2.0/data
    path.logs: /home/rock/elasticsearch-7.2.0/logs
    network.host: 192.168.1.69
    http.port: 9200
    transport.tcp.port: 9300
    node.master: true
    node.data: true
    discovery.zen.ping.unicast.hosts: ["192.168.1.69:9300","192.168.1.70:9300"]
    discovery.zen.minimum_master_nodes: 1

  • Data node configuration (192.168.1.70)

    cluster.name: es
    node.name: es2
    path.data: /home/rock/elasticsearch-7.2.0/data
    path.logs: /home/rock/elasticsearch-7.2.0/logs
    network.host: 192.168.1.70
    http.port: 9200
    transport.tcp.port: 9300
    node.master: false
    node.data: true
    discovery.zen.ping.unicast.hosts: ["192.168.1.69:9300","192.168.1.70:9300"]
    discovery.zen.minimum_master_nodes: 1
  • Configuration reference (see the note on 7.x discovery after the table)

    Setting                              Meaning
    cluster.name                         Cluster name
    node.name                            Node name
    path.data                            Data directory
    path.logs                            Log directory
    network.host                         Host/IP the node binds to
    http.port                            HTTP port
    transport.tcp.port                   TCP transport port
    node.master                          Whether the node may be elected master
    node.data                            Whether the node stores data
    discovery.zen.ping.unicast.hosts     Initial list of master-eligible hosts probed when a node starts
    discovery.zen.minimum_master_nodes   Minimum number of master-eligible nodes required to form a cluster
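
Note: the discovery.zen.* settings above are the Elasticsearch 6.x style. Elasticsearch 7.x deprecates them (discovery.zen.minimum_master_nodes is ignored with a warning), and a brand-new 7.x cluster generally will not elect its first master without cluster.initial_master_nodes. If startup fails with "master not discovered" errors, switch to the 7.x-style equivalents, sketched here for this article's two nodes:

    # 7.x-style discovery (set on both nodes)
    discovery.seed_hosts: ["192.168.1.69:9300", "192.168.1.70:9300"]
    # Master-eligible node name(s); only consulted for the very first cluster bootstrap
    cluster.initial_master_nodes: ["es1"]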

3. Starting Elasticsearch and Checking Health

  • Start

    # Enter the elasticsearch root directory
    cd /home/rock/elasticsearch-7.2.0
    # Start the node
    ./bin/elasticsearch
  • Check the cluster health
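
The health endpoint can be queried from any machine with the standard Elasticsearch REST API; a healthy two-node cluster returns something like the JSON below:

    curl 'http://192.168.1.69:9200/_cluster/health?pretty'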

    {
      "cluster_name": "es",
      "status": "green",
      "timed_out": false,
      "number_of_nodes": 2,
      "number_of_data_nodes": 2,
      "active_primary_shards": 0,
      "active_shards": 0,
      "relocating_shards": 0,
      "initializing_shards": 0,
      "unassigned_shards": 0,
      "delayed_unassigned_shards": 0,
      "number_of_pending_tasks": 0,
      "number_of_in_flight_fetch": 0,
      "task_max_waiting_in_queue_millis": 0,
      "active_shards_percent_as_number": 100
    }

If the response shows "status": "green", the cluster is healthy.

IV. Logstash Configuration

  • Configure the home directory, data/log paths, JVM options, and pipelines

    # Enter the Logstash directory
    cd /home/rock/logstash-7.2.0
    # Edit the startup options
    vim config/startup.options
    # Set the following
    LS_HOME=/home/rock/logstash-7.2.0
    LS_SETTINGS_DIR=/home/rock/logstash-7.2.0/config
    # Edit the main settings
    vim config/logstash.yml
    # Add the following
    path.data: /home/rock/logstash-7.2.0/data
    path.logs: /home/rock/logstash-7.2.0/logs
    # Edit the JVM options
    vim config/jvm.options
    # Adjust the heap to whatever suits your machine
    -Xms512m
    -Xmx512m
    # Switch the garbage collector from the default CMS to G1
    #-XX:+UseConcMarkSweepGC
    #-XX:CMSInitiatingOccupancyFraction=75
    #-XX:+UseCMSInitiatingOccupancyOnly
    -XX:+UseG1GC
    # Edit the pipeline definitions
    vim config/pipelines.yml
    # Add the following pipeline
    - pipeline.id: my_pipeline_name
      path.config: "/home/rock/logstash-7.2.0/config/logstash.conf"
      queue.type: persisted
  • Configure the Logstash input and output (a quick smoke test follows the listing)

    # Edit the pipeline config
    vim config/logstash.conf
    # Add the following
    input {
      tcp {
        port => 12345
        codec => json
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["192.168.1.69:9200","192.168.1.70:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
      }
      stdout {
      }
    }
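
Before wiring up the application, you can smoke-test the pipeline. Started without -f, Logstash picks up config/pipelines.yml automatically; nc (netcat) then lets you push a single JSON event into the TCP input:

    # Start Logstash with the pipelines defined in config/pipelines.yml
    ./bin/logstash
    # From another shell, send one JSON event to the TCP input
    echo '{"message":"hello from nc"}' | nc 192.168.1.71 12345

The event should show up on Logstash's stdout and in a logstash-YYYY.MM.dd index in Elasticsearch.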

 

V. Kibana Configuration

Edit the configuration:

    # Enter the Kibana directory
    cd /home/rock/kibana-7.2.0-linux-x86_64
    # Edit the configuration
    vim config/kibana.yml
    # Add the following
    server.port: 5601
    server.host: "192.168.1.71"
    elasticsearch.hosts: ["http://192.168.1.69:9200","http://192.168.1.70:9200"]

  • Start

    bin/kibana

Open a browser and visit 192.168.1.71:5601.

VI. log4j2 Setup

Configure log4j2.xml with a custom log level and a custom JSON layout (the layout is implemented in the next listing). The ThriftAppender below is a second, separately implemented custom appender; drop that element if you do not have one.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Log levels in priority order: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
    <!-- The status attribute controls log4j2's own internal logging and may be omitted; set it to "trace" to see log4j2's detailed internal output -->
    <!-- monitorInterval: log4j2 re-reads this file and reconfigures itself automatically, checking at the given interval in seconds -->
    <!-- packages: the package scanned for custom layouts and appenders -->
    <configuration status="WARN" monitorInterval="30" packages="com.test.rock.log">
        <properties>
            <Property name="PROJECT_NAME">my-project</Property>
            <Property name="ELK_LOG_PATTERN">%m</Property>
        </properties>
        <CustomLevels>
            <!-- Note: the smaller the intLevel, the higher the priority (per the official log4j2 docs) -->
            <CustomLevel name="CUSTOMER" intLevel="1" />
        </CustomLevels>
        <!-- Define all appenders first -->
        <appenders>
            <!-- Console appender -->
            <console name="Console" target="SYSTEM_OUT">
                <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY" />
                <!-- Output pattern -->
                <PatternLayout pattern="ROCK-%d{HH:mm:ss.SSS} %-5level - %msg%n" />
                <!-- <JsonLayout compact="true" eventEol="true" />-->
            </console>
            <Socket name="Socket" host="192.168.1.71" port="12345">
                <ThresholdFilter level="CUSTOMER" onMatch="ACCEPT" onMismatch="DENY" />
                <!-- <JsonLayout compact="true" eventEol="true" />-->
                <ElkJsonPatternLayout pattern="${ELK_LOG_PATTERN}" projectName="${PROJECT_NAME}"/>
            </Socket>
            <!-- A second, custom appender -->
            <ThriftAppender name="Thrift" host="127.0.0.1">
                <ThresholdFilter level="CUSTOMER" onMatch="ACCEPT" onMismatch="DENY" />
                <ElkJsonPatternLayout pattern="${ELK_LOG_PATTERN}" projectName="${PROJECT_NAME}"/>
            </ThriftAppender>
        </appenders>
        <!-- Then define the loggers; an appender only takes effect once a logger references it -->
        <loggers>
            <root level="all">
                <appender-ref ref="Console" />
                <appender-ref ref="Socket"/>
                <appender-ref ref="Thrift"/>
            </root>
        </loggers>
    </configuration>

Create the custom log4j2 JSON layout:

    package com.test.rock.log;

    import com.fasterxml.jackson.core.JsonProcessingException;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.commons.lang3.time.DateFormatUtils;
    import org.apache.logging.log4j.core.Layout;
    import org.apache.logging.log4j.core.LogEvent;
    import org.apache.logging.log4j.core.config.Configuration;
    import org.apache.logging.log4j.core.config.Node;
    import org.apache.logging.log4j.core.config.plugins.*;
    import org.apache.logging.log4j.core.layout.AbstractStringLayout;
    import org.apache.logging.log4j.core.layout.PatternLayout;
    import org.apache.logging.log4j.core.layout.PatternSelector;
    import org.apache.logging.log4j.core.pattern.RegexReplacement;

    import java.io.File;
    import java.lang.management.ManagementFactory;
    import java.lang.management.RuntimeMXBean;
    import java.nio.charset.Charset;

    @Plugin(name = "ElkJsonPatternLayout", category = Node.CATEGORY, elementType = Layout.ELEMENT_TYPE, printObject = true)
    public class ElkJsonPatternLayout extends AbstractStringLayout {

        /** Project path */
        private static String PROJECT_PATH;

        private PatternLayout patternLayout;
        private String projectName;

        static {
            PROJECT_PATH = new File("").getAbsolutePath();
        }

        private ElkJsonPatternLayout(Configuration config, RegexReplacement replace, String eventPattern,
                                     PatternSelector patternSelector, Charset charset, boolean alwaysWriteExceptions,
                                     boolean noConsoleNoAnsi, String headerPattern, String footerPattern, String projectName) {
            super(config, charset,
                    PatternLayout.createSerializer(config, replace, headerPattern, null, patternSelector, alwaysWriteExceptions,
                            noConsoleNoAnsi),
                    PatternLayout.createSerializer(config, replace, footerPattern, null, patternSelector, alwaysWriteExceptions,
                            noConsoleNoAnsi));
            this.projectName = projectName;
            this.patternLayout = PatternLayout.newBuilder()
                    .withPattern(eventPattern)
                    .withPatternSelector(patternSelector)
                    .withConfiguration(config)
                    .withRegexReplacement(replace)
                    .withCharset(charset)
                    .withAlwaysWriteExceptions(alwaysWriteExceptions)
                    .withNoConsoleNoAnsi(noConsoleNoAnsi)
                    .withHeader(headerPattern)
                    .withFooter(footerPattern)
                    .build();
        }

        @Override
        public String toSerializable(LogEvent event) {
            // Render the event with the wrapped PatternLayout, then wrap the result in the JSON envelope
            String message = patternLayout.toSerializable(event);
            String jsonStr = new JsonLoggerInfo(projectName, message, event.getLevel().name(), event.getLoggerName(),
                    Thread.currentThread().getName(), event.getTimeMillis()).toString();
            return jsonStr + "\n";
        }

        @PluginFactory
        public static ElkJsonPatternLayout createLayout(
                @PluginAttribute(value = "pattern", defaultString = PatternLayout.DEFAULT_CONVERSION_PATTERN) final String pattern,
                @PluginElement("PatternSelector") final PatternSelector patternSelector,
                @PluginConfiguration final Configuration config,
                @PluginElement("Replace") final RegexReplacement replace,
                // LOG4J2-783 use platform default by default, so do not specify defaultString for charset
                @PluginAttribute(value = "charset") final Charset charset,
                @PluginAttribute(value = "alwaysWriteExceptions", defaultBoolean = true) final boolean alwaysWriteExceptions,
                @PluginAttribute(value = "noConsoleNoAnsi", defaultBoolean = false) final boolean noConsoleNoAnsi,
                @PluginAttribute("header") final String headerPattern,
                @PluginAttribute("footer") final String footerPattern,
                @PluginAttribute("projectName") final String projectName) {
            return new ElkJsonPatternLayout(config, replace, pattern, patternSelector, charset,
                    alwaysWriteExceptions, noConsoleNoAnsi, headerPattern, footerPattern, projectName);
        }

        public static String getProcessID() {
            RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
            String name = runtime.getName(); // format: "pid@hostname"
            try {
                return name.substring(0, name.indexOf('@'));
            } catch (Exception e) {
                return null;
            }
        }

        public static String getThreadID() {
            // id of the thread that produced the event (layouts run on the logging thread for sync appenders)
            return "" + Thread.currentThread().getId();
        }

        /**
         * The JSON payload written for each log event
         */
        public static class JsonLoggerInfo {
            /** Project name */
            private String projectName;
            /** Current process ID */
            private String pid;
            /** Current thread ID */
            private String tid;
            /** Current thread name */
            private String tidname;
            /** Log message */
            private String message;
            /** Log level */
            private String level;
            /** Log topic (logger name) */
            private String topic;
            /** Log timestamp */
            private String time;

            public JsonLoggerInfo(String projectName, String message, String level, String topic, String tidname, long timeMillis) {
                this.projectName = projectName;
                this.pid = getProcessID();
                this.tid = getThreadID();
                this.tidname = tidname;
                this.message = message;
                this.level = level;
                this.topic = topic;
                this.time = DateFormatUtils.format(timeMillis, "yyyy-MM-dd HH:mm:ss.SSS");
            }

            public String getProjectName() { return projectName; }
            public String getPid() { return pid; }
            public String getTid() { return tid; }
            public String getTidname() { return tidname; }
            public String getMessage() { return message; }
            public String getLevel() { return level; }
            public String getTopic() { return topic; }
            public String getTime() { return time; }

            @Override
            public String toString() {
                try {
                    return new ObjectMapper().writeValueAsString(this);
                } catch (JsonProcessingException e) {
                    e.printStackTrace();
                }
                return null;
            }
        }
    }
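
The layout depends on log4j2, Jackson, and Apache Commons Lang. A sketch of the Maven dependencies, assuming Maven is the build tool (the version numbers here are assumptions; align them with your project):

    <!-- versions below are assumptions; pick ones compatible with your project -->
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.11.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.11.2</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.9</version>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.9</version>
    </dependency>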

A test main method:

    // A minimal runnable wrapper (the class and logger names here are illustrative)
    import org.apache.logging.log4j.Level;
    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class LogTest {
        private static final Logger log = LogManager.getLogger(LogTest.class);

        public static void main(String[] args) throws Exception {
            log.info("abc abc abc");
            for (int i = 0; i < 1000000; i++) {
                log.log(Level.toLevel("CUSTOMER"), "hahahahaha");
                try {
                    Thread.sleep(1000);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
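
Given the configuration above, CUSTOMER (intLevel 1) outranks every built-in level, so the CUSTOMER messages pass both the console's info threshold and the Socket appender's CUSTOMER threshold and are shipped to Logstash, while the plain info message only reaches the console.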

VII. A Quick Look at Kibana's Visual Interface

Once logs are being delivered, the final step is to explore the aggregated log data in Kibana's visual interface.

Create an index pattern
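
In Kibana 7.x this is done under Management → Index Patterns → Create index pattern: enter logstash-* as the pattern, then select @timestamp as the time filter field.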

View the indexed results

 
