Goal: address the current lack of monitoring and auditing of business logs, where an attack would otherwise go unnoticed.
Also: lay the groundwork for future monitoring of sensitive interfaces.
Reference: https://zhuanlan.zhihu.com/p/340238202
Filebeat + message middleware (Redis or Kafka) + Logstash + Elasticsearch + Kibana

This architecture adds middleware to decouple the processing pipeline. On one hand, the middleware absorbs sudden traffic spikes (peak shaving), so a burst of requests does not overload and crash the system. On the other hand, if Logstash goes down, log data remains in the middleware; when Logstash restarts, it reads the accumulated backlog, so no data is lost.

The system below is built following the third architecture described above. Host configuration and the software installed on each host are as follows:
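The buffering behaviour described above can be sketched with an in-process queue standing in for Kafka. This is purely illustrative (the queue, names, and sizes are not part of the deployment): the producer (Filebeat) keeps writing while the consumer (Logstash) is down, and the backlog is drained without loss once the consumer restarts.

```python
import queue

# Bounded in-process buffer standing in for a retention-limited Kafka topic
buffer = queue.Queue(maxsize=10000)

def produce(lines):
    for line in lines:
        buffer.put(line)  # Filebeat ships each log line to the middleware

incoming = [f"log line {i}" for i in range(100)]

produce(incoming[:50])       # consumer is "down": 50 lines accumulate safely
assert buffer.qsize() == 50

consumed = []
def consume():               # consumer "restarts" and drains the backlog
    while not buffer.empty():
        consumed.append(buffer.get())

produce(incoming[50:])       # producer never had to stop
consume()
assert consumed == incoming  # nothing was lost during the outage
print(len(consumed))
```

With a direct Filebeat-to-Logstash pipeline, the lines produced during the outage would have had to be retried or dropped; the buffer makes the outage invisible to the producer.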
Hostname | Spec | Storage | OS | Role |
---|---|---|---|---|
master | 8C16G | 1T | CentOS 7.9 | Elasticsearch, Logstash, Kibana, Kafka (with bundled Zookeeper) |
node1 | 8C16G | 1T | CentOS 7.9 | Elasticsearch, Kafka (with bundled Zookeeper) |
node2 | 8C16G | 1T | CentOS 7.9 | Elasticsearch, Kafka (with bundled Zookeeper) |
node3 | 4C8G | 1T | CentOS 7.9 | Filebeat log collection, other test operations, Nginx service |
ELK download (this environment uses version 7.15.2):
https://www.elastic.co/cn/downloads/past-releases#
Kafka download (this environment uses kafka_2.12-2.8.2):
https://archive.apache.org/dist/kafka/2.8.2/
1) Set up the Elasticsearch cluster
Edit the `config/elasticsearch.yml` file and modify the following settings (values separated by `/` differ per node):

```yaml
cluster.name: es-cluster
node.name: master           # node-1 / node-2 on the other hosts
path.data: /app/elasticsearch-7.15.2/data
path.logs: /app/elasticsearch-7.15.2/logs
network.host: 172.32.10.14  # 172.32.10.15 / 172.32.10.16 on the other hosts
http.port: 9200
discovery.seed_hosts: ["172.32.10.14", "172.32.10.15", "172.32.10.16"]
cluster.initial_master_nodes: ["master"]
```
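Only `node.name` and `network.host` differ between the three nodes, so the per-node files can be generated from the host table. A minimal sketch (the helper and template are illustrative, not an Elastic tool):

```python
# Node names and IPs as used in this environment
HOSTS = {
    "master": "172.32.10.14",
    "node-1": "172.32.10.15",
    "node-2": "172.32.10.16",
}

TEMPLATE = """cluster.name: es-cluster
node.name: {name}
path.data: /app/elasticsearch-7.15.2/data
path.logs: /app/elasticsearch-7.15.2/logs
network.host: {ip}
http.port: 9200
discovery.seed_hosts: [{seeds}]
cluster.initial_master_nodes: ["master"]
"""

def render(name: str) -> str:
    """Render elasticsearch.yml for one node; seed hosts are shared."""
    seeds = ", ".join(f'"{ip}"' for ip in HOSTS.values())
    return TEMPLATE.format(name=name, ip=HOSTS[name], seeds=seeds)

print(render("node-1"))
```

Generating the files this way keeps the shared settings identical across nodes, which avoids the common mistake of cluster members disagreeing on `cluster.name` or the seed list.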
Edit the `config/jvm.options` file to set the JVM heap size (the default is 1 GB); here it is adjusted as appropriate for these hosts:

```
-Xms8g
-Xmx8g
```
1. Raise the maximum memory-map count. Edit `/etc/sysctl.conf`, add the line below, then run `sysctl -p` to apply it:

```
vm.max_map_count = 655360
```

2. Raise the maximum open-file and process limits. Edit `/etc/security/limits.conf` and add:

```
* soft nofile 65535
* hard nofile 65535
* soft nproc 4096
* hard nproc 8192
```

Notes:
1. If problems arise during deployment, log in again for the limits to take effect.
2. At startup, confirm that a local Java environment is installed. If not, edit `bin/elasticsearch` to point at the bundled JDK, as follows:

```sh
export JAVA_HOME=/app/elasticsearch-7.15.2/jdk
export PATH=$JAVA_HOME/bin:$PATH
if [ -x "$JAVA_HOME/bin/java" ]; then
  JAVA="/app/elasticsearch-7.15.2/jdk/bin/java"
else
  JAVA=`which java`
fi
```
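Before restarting, it can be worth sanity-checking that the sysctl line parses to the intended value; a small illustrative check (the parsing helper is a sketch, not a system tool):

```python
def parse_sysctl(text: str) -> dict:
    """Parse `key = value` lines as found in /etc/sysctl.conf,
    skipping blank lines and comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

conf = """
# Elasticsearch requirement
vm.max_map_count = 655360
"""

# Elasticsearch refuses to start in production mode below 262144
assert int(parse_sysctl(conf)["vm.max_map_count"]) >= 262144
print(parse_sysctl(conf))
```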
2) Deploy Kibana
Edit `config/kibana.yml` as follows:

```yaml
server.port: 5601
server.host: "172.32.10.14"
elasticsearch.hosts: ["http://172.32.10.14:9200", "http://172.32.10.15:9200", "http://172.32.10.16:9200"]
kibana.index: ".kibana"
```

Start Kibana in the background:

```sh
nohup ./kibana-7.15.2-linux-x86_64/bin/kibana >> /home/user/logs/kibana.log &
```
3) Deploy Kafka and Zookeeper
Edit the `config/zookeeper.properties` file as follows (on each node, also write the matching server id 1/2/3 into `dataDir/myid`):

```properties
dataDir=/app/kafka_2.12-2.8.2/zk/data
dataLogDir=/app/kafka_2.12-2.8.2/zk/logs
clientPort=2181
tickTime=2000
initLimit=20
syncLimit=10
server.1=172.32.10.14:2888:3888
server.2=172.32.10.15:2888:3888
server.3=172.32.10.16:2888:3888
maxClientCnxns=0
```
After startup, the Zookeeper data directory looks like this (`tree zk`):

```
zk
├── data
│   ├── myid
│   └── version-2
│       ├── acceptedEpoch
│       ├── currentEpoch
│       ├── snapshot.0
│       ├── snapshot.35
│       └── snapshot.49
└── logs
    └── version-2
        ├── log.1
        ├── log.100000001
        ├── log.200000001
        ├── log.200000007
        ├── log.36
        └── log.400000001
```
Edit the `config/server.properties` file (`broker.id` and `listeners` differ per node):

```properties
broker.id=1
listeners=PLAINTEXT://172.32.10.14:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/app/kafka_2.12-2.8.2/kafka-logs
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=172.32.10.14:2181,172.32.10.15:2181,172.32.10.16:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
```
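Across the three brokers only `broker.id` and `listeners` change, while `zookeeper.connect` must be identical everywhere. A small illustrative parser can check a set of `server.properties` files for consistency (the helper is a sketch, not a Kafka tool):

```python
def parse_props(text: str) -> dict:
    """Parse Java .properties-style `key=value` lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# One minimal per-broker config per host, as in this environment
nodes = {
    ip: parse_props(
        f"broker.id={i + 1}\n"
        f"listeners=PLAINTEXT://{ip}:9092\n"
        "zookeeper.connect=172.32.10.14:2181,172.32.10.15:2181,172.32.10.16:2181\n"
    )
    for i, ip in enumerate(["172.32.10.14", "172.32.10.15", "172.32.10.16"])
}

ids = [p["broker.id"] for p in nodes.values()]
assert len(set(ids)) == len(ids)                  # broker ids must be unique
zks = {p["zookeeper.connect"] for p in nodes.values()}
assert len(zks) == 1                              # same ensemble everywhere
print(ids)
```

A duplicated `broker.id` or a mismatched `zookeeper.connect` string is a common copy-paste mistake when cloning the config to the other nodes.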
Start Zookeeper first, then Kafka:

```sh
nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties >> /home/user/zookeeper.log &
nohup ./bin/kafka-server-start.sh config/server.properties >> /home/user/kafka_server.log &
```
Other commands:

1. Create a topic:

```sh
./bin/kafka-topics.sh --create --zookeeper 172.32.10.14:2181,172.32.10.15:2181,172.32.10.16:2181 --replication-factor 1 --partitions 1 --topic topic_name
```

2. List topics:

```sh
./bin/kafka-topics.sh --zookeeper 172.32.10.14:2181,172.32.10.15:2181,172.32.10.16:2181 --list
```

3. Delete a topic:

```sh
./bin/kafka-topics.sh --delete --zookeeper 172.32.10.14:2181,172.32.10.15:2181,172.32.10.16:2181 --topic topic_name
```

Note: the same JDK problem can occur at startup; edit `./bin/kafka-run-class.sh` with the following configuration:

```sh
export JAVA_HOME=/app/elasticsearch-7.15.2/jdk
# Which java to use
if [ -z "$JAVA_HOME" ]; then
  JAVA="java"
else
  JAVA="$JAVA_HOME/bin/java"
fi
```
4) Deploy Logstash
Add `filter.conf`, `input.conf`, and `output.conf` under the `/app/logstash-7.15.2/config/conf` directory:

```
input {
  kafka {
    type => "kafka_nginx"
    codec => "json"
    topics => ["nginx_access_x", "nginx_error_x"]
    decorate_events => true
    consumer_threads => 5
    bootstrap_servers => "172.32.10.14:9092,172.32.10.15:9092,172.32.10.16:9092"
  }
}

filter {
  if [@metadata][kafka][topic] == "nginx_access_x" {
    grok {
      match => { "message" => "%{IP:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:request_time}\] \"%{WORD:request_method} %{URIPATHPARAM:url_args} %{URIPROTO:protocol}/%{DATA:treaty}\" %{NUMBER:status} %{NUMBER:body_sent_bytes} \"%{DATA:http_referer}\" \"%{DATA:http_user_agent}\" \"%{DATA:X_Forwarded_For}\" %{DATA:host} %{URIPATH:uri}" }
    }
  }
  if [@metadata][kafka][topic] == "nginx_error_x" {
    grok {
      match => { "message" => "%{DATA:error_time} \[%{DATA:error_type}\] %{DATA:error_info}, client: %{DATA:client}, server: %{DATA:server}, request: %{DATA:request}, host: \"%{DATA:host}\"" }
    }
  }
}

output {
  if [@metadata][kafka][topic] == "nginx_access_x" {
    elasticsearch {
      hosts => ["172.32.10.14", "172.32.10.15", "172.32.10.16"]
      index => 'nginx-access-x-%{+YYYY-MM-dd}'
    }
  }
  if [@metadata][kafka][topic] == "nginx_error_x" {
    elasticsearch {
      hosts => ["172.32.10.14", "172.32.10.15", "172.32.10.16"]
      index => 'nginx-error-x-%{+YYYY-MM-dd}'
    }
  }
}
```

Note that the filter conditions must use the same topic names as the input and output sections (`nginx_access_x` / `nginx_error_x`), otherwise the grok filters never match and the events are indexed unparsed.
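The access-log grok pattern above can be prototyped outside Logstash before deployment. The sketch below uses a simplified Python regex covering the first part of the pattern; the field names mirror the grok captures, but the regex and sample line are illustrative approximations, not grok itself:

```python
import re

# Simplified equivalent of the access-log grok pattern; each named group
# mirrors a grok capture (remote_addr, request_time, status, ...)
ACCESS_RE = re.compile(
    r'(?P<remote_addr>\d+\.\d+\.\d+\.\d+) - (?P<remote_user>\S+) '
    r'\[(?P<request_time>[^\]]+)\] '
    r'"(?P<request_method>\w+) (?P<url_args>\S+) '
    r'(?P<protocol>\w+)/(?P<treaty>[\d.]+)" '
    r'(?P<status>\d+) (?P<body_sent_bytes>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
)

# Hypothetical sample line in the standard nginx access-log format
sample = ('10.0.0.1 - - [07/Feb/2024:12:00:00 +0800] '
          '"GET /index.html?a=1 HTTP/1.1" 200 612 "-" "curl/7.61.1"')

m = ACCESS_RE.match(sample)
assert m is not None
fields = m.groupdict()
print(fields["status"], fields["url_args"])
```

Testing the pattern against real log lines this way is much faster than iterating through a full Filebeat-to-Kibana round trip; Kibana's built-in Grok Debugger serves the same purpose with the actual grok syntax.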
Start Logstash (with automatic config reloading):

```sh
nohup ./bin/logstash -f ./config/conf/ --config.reload.automatic >> /home/user/logstash.log &
```
5) Deploy Filebeat
Edit `filebeat.yml` as follows:

```yaml
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  tags: ["nginx_access_x"]
  fields:
    topic: nginx_access_x
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  tags: ["nginx_error_x"]
  fields:
    topic: nginx_error_x

output.kafka:
  hosts: ["172.32.10.14:9092", "172.32.10.15:9092", "172.32.10.16:9092"]
  topic: '%{[fields.topic]}'
```

Note: if startup fails with

```
Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'filebeat-7.15.2-linux-x86_64/filebeat.yml')
```

comment out the default output section in the original configuration, since only one output may be active.
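The `topic: '%{[fields.topic]}'` line routes each event to the Kafka topic named in its own fields, which is how one Filebeat instance feeds both topics. The resolution can be sketched like this (the resolver function is illustrative; Filebeat's real format-string engine is more general):

```python
import re

def resolve_topic(template: str, event: dict) -> str:
    """Resolve a Beats-style '%{[a.b]}' reference against an event dict.
    Illustrative only; Filebeat format strings support much more."""
    def lookup(match):
        value = event
        for part in match.group(1).split("."):
            value = value[part]  # walk nested keys: fields -> topic
        return str(value)
    return re.sub(r"%\{\[([^\]]+)\]\}", lookup, template)

access_event = {"message": "...", "fields": {"topic": "nginx_access_x"}}
error_event = {"message": "...", "fields": {"topic": "nginx_error_x"}}

assert resolve_topic("%{[fields.topic]}", access_event) == "nginx_access_x"
assert resolve_topic("%{[fields.topic]}", error_event) == "nginx_error_x"
print("ok")
```

Because the topic name travels with the event, adding a new log source only requires a new input block with its own `fields.topic`; the Kafka output needs no changes.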
Start Filebeat:

```sh
nohup ./filebeat-7.15.2-linux-x86_64/filebeat -e -c ./filebeat-7.15.2-linux-x86_64/filebeat.yml >> filebeat.log &
```
6) Configure the Kibana index pattern
To retrieve the nginx logs stored in ES, a Kibana index pattern still needs to be configured; it is then used to browse the logs in Kibana.
Follow-up work such as log rotation and field optimization will be updated here as issues come up in practice.