Pull the image:
docker pull elasticsearch:6.8.10
Run a single node:
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 elasticsearch:6.8.10
If http://your-ip:9200 is reachable, the container started successfully. However, if an error like the following appears:
[2019-10-27T14:38:59,356][INFO ][o.e.n.Node ] [kniXCrn] starting ...
[2019-10-27T14:38:59,712][INFO ][o.e.t.TransportService ] [kniXCrn] publish_address {172.17.0.6:9300}, bound_addresses {[::]:9300}
[2019-10-27T14:38:59,754][INFO ][o.e.b.BootstrapChecks ] [kniXCrn] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-10-27T14:38:59,827][INFO ][o.e.n.Node ] [kniXCrn] stopping ...
[2019-10-27T14:38:59,855][INFO ][o.e.n.Node
If so, edit /etc/sysctl.conf as follows:
Open the config file:
vim /etc/sysctl.conf
Add this line at the bottom of the file:
vm.max_map_count=655360
After saving, apply the change:
sysctl -p
Then restart elasticsearch; it should now start successfully.
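To confirm the kernel setting actually took effect before restarting the container, you can read the live value back:

```shell
# Read the current limit from the running kernel;
# after `sysctl -p` this should print 655360
cat /proc/sys/vm/max_map_count
```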
The above is a bare-bones start without a custom config file; next, starting with a config file.
The volume paths for elasticsearch are placed under /usr/local/elasticsearch on the host, with data, conf, and logs directories created inside it.
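The directory layout described above can be created in one step (paths exactly as used in this guide):

```shell
# Create the host directories that will back the elasticsearch volumes
mkdir -p /usr/local/elasticsearch/data \
         /usr/local/elasticsearch/conf \
         /usr/local/elasticsearch/logs
```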
Create the file elasticsearch.yml under /usr/local/elasticsearch/conf on the host:
cluster.name: elk-cluster
path.data: /usr/local/elasticsearch/data
path.logs: /usr/local/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
Also fix the file access permissions:
chmod 777 /usr/local/elasticsearch/*
Run the docker image:
docker run -d --name elasticsearch \
  -v /usr/local/elasticsearch/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /usr/local/elasticsearch/data:/usr/local/elasticsearch/data \
  -v /usr/local/elasticsearch/logs:/usr/local/elasticsearch/logs \
  -p 9200:9200 -p 9300:9300 elasticsearch:6.8.10
Open http://ip:9200 in a browser; if the node's cluster info JSON is returned, startup succeeded.
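The screenshot of that response is missing from the original post; for an Elasticsearch 6.8.x node, the JSON returned on port 9200 typically looks like this (the name and uuid values here are illustrative, not from the actual setup):

```json
{
  "name": "kniXCrn",
  "cluster_name": "elk-cluster",
  "cluster_uuid": "...",
  "version": {
    "number": "6.8.10"
  },
  "tagline": "You Know, for Search"
}
```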
logstash:
Pull the image:
docker pull logstash:6.8.10
Mount the files under /usr/local/logstash on the host and create the needed directories.
Create the file log4j2.properties in /usr/local/logstash/conf:
logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug
Then create an empty config file: logstash.yml
Create the pipeline config file: myboot.conf
input {
  tcp {
    port => 4560
    mode => "server"
    tags => ["tags"]
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "192.168.2.40:9200"
    index => "boot-demo-%{+YYYY.MM.dd}"
  }
}
Create the config file: pipelines.yml
- pipeline.id: my-logstash
  path.config: "/usr/share/logstash/config/myboot.conf"
  pipeline.workers: 3
Now logstash can be started:
docker run -d --name=logstash -p 9600:9600 -p 4560:4560 -it \
  -v /usr/local/logstash/conf/:/usr/share/logstash/config/ logstash:6.8.10
Check the logs (docker logs logstash) to confirm it started successfully.
kibana:
Pull the image:
docker pull kibana:6.8.10
Start it directly; the elasticsearch address must be specified here:
docker run -d --name kibana -e ELASTICSEARCH_URL=http://192.168.2.41:9200 -p 5601:5601 kibana:6.8.10
Visit http://ip:5601; the Kibana UI should now be reachable.
Spring Boot project
Since the test project is simple, we just generate some logs by hand and throw a few errors during testing. The full logback-spring.xml is listed here:
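This config relies on the LogstashTcpSocketAppender and JSON encoders from logstash-logback-encoder, so the project needs that dependency; a typical Maven fragment (the version number is an assumption — pick one compatible with your Spring Boot / logback version):

```xml
<!-- logstash-logback-encoder: provides LogstashTcpSocketAppender and the JSON encoders -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.3</version> <!-- assumed version; verify compatibility -->
</dependency>
```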
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName" source="spring.application.name"/>

    <contextName>logback</contextName>

    <!-- Colored log output -->
    <!-- Converter classes the colored output depends on -->
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
    <conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
    <conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />

    <!-- Console log output pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint}[traceId:%X{X-B3-TraceId}][spanId:%X{X-B3-SpanId}] %clr([%-5level]) %clr([pid:${PID:- }]){magenta} [%logger:%line] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />

    <property name="LOG_PATTERN"
              value="%d{yyyy-MM-dd HH:mm:ss.SSS}[traceId:%X{X-B3-TraceId}][spanId:%X{X-B3-SpanId}] [%-5level] [pid:${PID:- }] [%logger:%line] %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder charset="utf-8">
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="log_stash_debug" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.2.41:4560</destination>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
        <includeCallerData>true</includeCallerData>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <!-- Custom log output format -->
                <pattern>
                    <pattern>
                        {
                        "tags": ["bootDemo-debug"],
                        "project": "myBootDemo",
                        "log_type": "log_stash_debug",
                        "level": "%level",
                        "service": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger:%line",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
-
    <appender name="log_stash_error" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.2.41:4560</destination>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "tags": ["bootDemo-error"],
                        "project": "myBootDemo",
                        "log_type": "log_stash_error",
                        "level": "%level",
                        "service": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger:%line",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>
-
    <appender name="log_stash_business"
              class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.2.41:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "tags": ["bootDemo-business"],
                        "project": "myBootDemo",
                        "log_type": "log_stash_business",
                        "level": "%level",
                        "service": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger:%line",
                        "message": "%message",
                        "stack_trace": "%exception{20}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <connectionStrategy>
            <roundRobin>
                <connectionTTL>5 minutes</connectionTTL>
            </roundRobin>
        </connectionStrategy>
    </appender>
-
    <root level="info">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="log_stash_debug"/>
        <appender-ref ref="log_stash_error"/>
    </root>

    <logger name="org.slf4j" level="info"/>
    <logger name="org.springframework" level="info"/>
    <logger name="org.apache" level="info"/>
    <logger name="pres.jeremy.testdemo" level="debug">
        <appender-ref ref="log_stash_business"/>
    </logger>
</configuration>
This setup mainly collects debug and error logs, plus the business logs of the test package.
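Before wiring up the application, the pipeline itself can be smoke-tested by sending one hand-written JSON line to the tcp input (host and port as configured above; the nc call is commented out so the snippet is safe to run without the server):

```shell
# One event in the json_lines format the tcp input expects
json='{"message":"manual smoke test","level":"ERROR","tags":["bootDemo-error"]}'
printf '%s\n' "$json"                               # show what would be sent
# printf '%s\n' "$json" | nc 192.168.2.41 4560      # uncomment to actually send it
```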
Because the logstash pipeline config writes to an elasticsearch index whose name varies by date, define a matching index pattern (e.g. boot-demo-*) in the Kibana console under Management -> Kibana -> Index Patterns -> Create index pattern.
That completes a simple working example.