
Deploying ELK (Elasticsearch + Logstash + Kibana) with Docker and monitoring Spring Boot logs

This post sets up a Docker-based ELK development environment for monitoring Spring Boot application logs.

Elasticsearch:

Pull the image:

docker pull elasticsearch:6.8.10

Run a single node:

docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 elasticsearch:6.8.10

If http://<your-ip>:9200 responds, the node started successfully. If instead you see an error like the following:

[2019-10-27T14:38:59,356][INFO ][o.e.n.Node ] [kniXCrn] starting ...
[2019-10-27T14:38:59,712][INFO ][o.e.t.TransportService ] [kniXCrn] publish_address {172.17.0.6:9300}, bound_addresses {[::]:9300}
[2019-10-27T14:38:59,754][INFO ][o.e.b.BootstrapChecks ] [kniXCrn] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-10-27T14:38:59,827][INFO ][o.e.n.Node ] [kniXCrn] stopping ...
[2019-10-27T14:38:59,855][INFO ][o.e.n.Node

then edit /etc/sysctl.conf as follows.

Open the configuration file:

vim /etc/sysctl.conf

Append this setting at the end of the file:

vm.max_map_count=655360

After saving, apply the change:

sysctl -p

Then restart Elasticsearch and it should start successfully.

The above is a quick start without a custom configuration file; next, starting with a mounted configuration.

I place the Elasticsearch volumes under /usr/local/elasticsearch on the host, creating data, conf, and logs directories there.

Create the file elasticsearch.yml under /usr/local/elasticsearch/conf on the host:

cluster.name: elk-cluster
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200

Also grant the container access to these directories:

chmod 777 /usr/local/elasticsearch/*

 

Run the container:

docker run -d --name elasticsearch \
-v /usr/local/elasticsearch/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /usr/local/elasticsearch/data:/usr/local/elasticsearch/data \
-v /usr/local/elasticsearch/logs:/usr/local/elasticsearch/logs \
-p 9200:9200 -p 9300:9300 elasticsearch:6.8.10

Open http://<your-ip>:9200 in a browser; if it returns the cluster information JSON (name, cluster_name, version, and so on), the node started successfully.

Logstash:

Pull the image:

docker pull logstash:6.8.10

Mount the files under /usr/local/logstash on the host and create the required directories.

Create the file log4j2.properties in /usr/local/logstash/conf:

logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug

Then create an empty configuration file logstash.yml.

Create the pipeline file myboot.conf:

input {
  tcp {
    port => 4560
    mode => "server"
    tags => ["tags"]
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "192.168.2.40:9200"
    index => "boot-demo-%{+YYYY.MM.dd}"
  }
}
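The json_lines codec in the tcp input above expects newline-delimited JSON: one complete JSON object per line of the TCP stream, which is the framing the Spring Boot side will send. A minimal Python sketch of that framing (field names here are placeholders, not produced by any real appender):

```python
import json

def encode_json_lines(event: dict) -> bytes:
    # One JSON object per line, newline-terminated: the framing
    # the json_lines codec splits the TCP stream on.
    return (json.dumps(event) + "\n").encode("utf-8")

frame = encode_json_lines({"level": "ERROR", "message": "boom"})
# The receiver can parse each line independently as it arrives.
event = json.loads(frame.decode("utf-8"))
```

Because each event is a self-delimiting line, Logstash can decode events incrementally without any length prefix.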

Create the configuration file pipelines.yml:

- pipeline.id: my-logstash
  path.config: "/usr/share/logstash/config/myboot.conf"
  pipeline.workers: 3

Now start Logstash:

docker run -d --name=logstash -p 9600:9600 -p 4560:4560 -it \
-v /usr/local/logstash/conf/:/usr/share/logstash/config/ logstash:6.8.10

Check the container logs (for example with docker logs logstash); they should show that Logstash started successfully.

 

Kibana:

Pull the image:

docker pull kibana:6.8.10

Start it directly, pointing it at the Elasticsearch address:

docker run -d --name kibana -e ELASTICSEARCH_URL=http://192.168.2.41:9200 -p 5601:5601 kibana:6.8.10

Open http://<your-ip>:5601; Kibana should now be reachable.

 

Spring Boot project

The test project is simple: during testing I generate some log output by hand, including a few error logs. Here is the full logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <contextName>logback</contextName>
    <!-- Converters required for colored console output -->
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
    <conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
    <conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />
    <!-- Console log pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint}[traceId:%X{X-B3-TraceId}][spanId:%X{X-B3-SpanId}] %clr([%-5level]) %clr([pid:${PID:- }]){magenta} [%logger:%line] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />
    <property name="LOG_PATTERN"
              value="%d{yyyy-MM-dd HH:mm:ss.SSS}[traceId:%X{X-B3-TraceId}][spanId:%X{X-B3-SpanId}] [%-5level] [pid:${PID:- }] [%logger:%line] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="log_stash_debug" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.2.41:4560</destination>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
        <includeCallerData>true</includeCallerData>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <!-- Custom JSON fields for each log event -->
                <pattern>
                    <pattern>
                        {
                        "tags": ["bootDemo-debug"],
                        "project": "myBootDemo",
                        "log_type": "log_stash_debug",
                        "level": "%level",
                        "service": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger:%line",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <appender name="log_stash_error" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.2.41:4560</destination>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "tags": ["bootDemo-error"],
                        "project": "myBootDemo",
                        "log_type": "log_stash_error",
                        "level": "%level",
                        "service": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger:%line",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>
    <appender name="log_stash_business"
              class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.2.41:4560</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "tags": ["bootDemo-business"],
                        "project": "myBootDemo",
                        "log_type": "log_stash_business",
                        "level": "%level",
                        "service": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger:%line",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>
    <root level="info">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="log_stash_debug"/>
        <appender-ref ref="log_stash_error"/>
    </root>
    <logger name="org.slf4j" level="info"/>
    <logger name="org.springframework" level="info"/>
    <logger name="org.apache" level="info"/>
    <logger name="pres.jeremy.testdemo" level="debug">
        <appender-ref ref="log_stash_business"/>
    </logger>
</configuration>

This configuration collects debug logs, error logs, and the business logs used for testing.
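For reference, one event rendered through the composite encoder's pattern ends up in Elasticsearch roughly as the document below. This is a hand-built sketch: at runtime logback resolves the %-conversions and ${...} properties, and the concrete values here (service name, pid, logger, line) are illustrative only.

```python
import json

def render_event(level, message, logger, line, thread="main", pid="1234"):
    # Approximate shape of the document the log_stash_debug appender emits;
    # the values are stand-ins for what logback would substitute.
    return {
        "tags": ["bootDemo-debug"],
        "project": "myBootDemo",
        "log_type": "log_stash_debug",
        "level": level,
        "service": "boot-demo",       # ${springAppName:-}
        "pid": pid,                   # ${PID:-}
        "thread": thread,             # %thread
        "class": f"{logger}:{line}",  # %logger:%line
        "message": message,           # %message
        "stack_trace": "",            # %exception{20}
    }

doc = render_event("DEBUG", "user lookup ok", "pres.jeremy.testdemo.UserService", 42)
payload = json.dumps(doc)  # what travels to Logstash over TCP, one line per event
```

The fixed fields (tags, project, log_type) are what make the three appenders' events distinguishable once they land in the same index.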

 

Because the Logstash pipeline writes to Elasticsearch with a date-based index name, define a wildcard index pattern in the Kibana console under Management -> Kibana -> Index Patterns -> Create index pattern so that it covers all the daily indices.
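Since the pipeline's output index is boot-demo-%{+YYYY.MM.dd}, an index pattern of boot-demo-* matches every daily index. A rough illustration of the wildcard matching, using Python's shell-style globbing as a stand-in for Kibana's matching:

```python
from fnmatch import fnmatch

# Daily indices as the pipeline would create them, plus one unrelated index.
indices = ["boot-demo-2019.10.27", "boot-demo-2019.10.28", "kibana-internal"]
matched = [name for name in indices if fnmatch(name, "boot-demo-*")]
```

Only the boot-demo-* indices match, so the index pattern keeps picking up each new day's index without further configuration.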

 

That completes a simple working example.
