
Installing EFK on CentOS 7 with yum for log collection

Configure the Tsinghua mirror:

Installing Elasticsearch

Installing Elasticsearch via yum

1. Import the Elasticsearch GPG key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch


2. Add the elasticsearch yum repo file, using the Tsinghua yum mirror
This installs the latest version by default. If you need a specific version, download that rpm package instead, configure the Java environment variables, and put java under /usr/bin.

With the yum source configured, run yum localinstall elasticsearch-6.7.0.rpm -y to install a specific version.

  cd /etc/yum.repos.d
  vi elasticsearch.repo

  [elasticsearch-6.x]
  name=Elasticsearch repository for 6.x packages
  baseurl=https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/
  gpgcheck=1
  gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
  enabled=1
  autorefresh=1
  type=rpm-md

3. Edit the configuration files

  vim /etc/elasticsearch/elasticsearch.yml
  # line 55
  network.host: 0.0.0.0

  vim /etc/elasticsearch/jvm.options
  # adjust the JVM heap, lines 22-23
  -Xms128m
  -Xmx128m

4. Raise the maximum number of memory map areas (VMAs) a single process may create

  vim /etc/sysctl.conf
  vm.max_map_count=655360

5. Apply the setting

sysctl -p
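The two steps above can also be scripted idempotently so a re-run of the setup never duplicates the line; a minimal sketch (operating on a temporary stand-in for /etc/sysctl.conf, so it is safe to try anywhere):

```shell
# Append vm.max_map_count only when the key is absent,
# so running the setup script twice never duplicates it.
conf=$(mktemp)                      # stand-in for /etc/sysctl.conf
echo "net.ipv4.ip_forward=1" > "$conf"

set_max_map_count() {
  if ! grep -q '^vm\.max_map_count=' "$1"; then
    echo "vm.max_map_count=655360" >> "$1"
  fi
}

set_max_map_count "$conf"
set_max_map_count "$conf"           # second run is a no-op
grep -c '^vm\.max_map_count=' "$conf"   # → 1
```

On the real host you would point it at /etc/sysctl.conf and follow with sysctl -p.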

6. Start Elasticsearch

  systemctl start elasticsearch
  tail -n 200 -f /var/log/elasticsearch/elasticsearch.log
7. Test by requesting port 9200 (e.g. curl http://127.0.0.1:9200). Output like the following means ES started successfully:

  {
    "name": "dSQV6I8",
    "cluster_name": "elasticsearch",
    "cluster_uuid": "v5GPTWAtT5emxFdjigFg-w",
    "version": {
      "number": "6.5.4",
      "build_flavor": "default",
      "build_type": "tar",
      "build_hash": "d2ef93d",
      "build_date": "2018-12-17T21:17:40.758843Z",
      "build_snapshot": false,
      "lucene_version": "7.5.0",
      "minimum_wire_compatibility_version": "5.6.0",
      "minimum_index_compatibility_version": "5.0.0"
    },
    "tagline": "You Know, for Search"
  }
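To script this health check rather than eyeball it, the version can be extracted from the banner with sed alone; a sketch (the JSON below is the sample response hard-coded in place of a live curl):

```shell
# Pull "number" (the ES version) out of the startup banner JSON.
banner='{ "version": { "number": "6.5.4", "build_flavor": "default" } }'
version=$(printf '%s\n' "$banner" \
  | sed -n 's/.*"number"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "$version"   # → 6.5.4
```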

Installing elasticsearch-head

ES officially ships no UI management tool, only the backend service. elasticsearch-head is a browser-based client tool developed for ES; its source is hosted on GitHub at https://github.com/mobz/elasticsearch-head

head offers 4 installation methods:

Source install, started via npm run start (not recommended)
Via docker (recommended)
Via a chrome extension (recommended)
Via the ES plugin mechanism (not recommended)
 

Installing via docker

  # Pull the image
  docker pull mobz/elasticsearch-head:5
  # Create the container
  docker create --name elasticsearch-head -p 9100:9100 mobz/elasticsearch-head:5
  # Start the container
  docker start elasticsearch-head

Then access it in a browser on port 9100:

Note:
Because the front end and back end are developed separately, cross-origin requests are involved, so CORS must be configured on the ES side:

  vim elasticsearch.yml
  http.cors.enabled: true
  http.cors.allow-origin: "*"

Fixing the 406 error when the head plugin submits requests:

  docker exec -it elasticsearch-head /bin/bash
  sed -i 's#application/x-www-form-urlencoded#application/json;charset=UTF-8#g' ./_site/vendor.js
  docker restart elasticsearch-head
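The sed line rewrites the Content-Type that head hard-codes in vendor.js; you can rehearse the substitution on a scratch file first (the file content here is a hypothetical one-line stand-in for _site/vendor.js):

```shell
# The 406 comes from head sending x-www-form-urlencoded bodies;
# the substitution switches the hard-coded content type to JSON.
f=$(mktemp)
echo 'contentType: "application/x-www-form-urlencoded",' > "$f"
sed -i 's#application/x-www-form-urlencoded#application/json;charset=UTF-8#g' "$f"
cat "$f"   # → contentType: "application/json;charset=UTF-8",
```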

After configuring x-pack security authentication, though, the head container just shows "connection failed"...

Installing via the chrome extension does not have this problem.

Open the Chrome Web Store and install: https://chrome.google.com/webstore/detail/elasticsearch-head/ffmkiejjmecolpfloofpjologoblkegm

Installing and configuring Kibana

Again a local yum install of a specific version, based on the Tsinghua mirror.

With the yum source configured:
yum localinstall kibana-6.7.0-x86_64.rpm -y

Edit the configuration file

  vim /etc/kibana/kibana.yml
  server.port: 5601
  server.host: "10.4.7.101"
  elasticsearch.hosts: ["http://10.4.7.101:9200"]

Start Kibana

  systemctl start kibana
  systemctl enable kibana

Access Kibana

http://10.4.7.101:5601/

Cracking x-pack to the Platinum edition

Reference article:

https://blog.csdn.net/weixin_45396564/article/details/103420345

Pitfalls when cracking x-pack on a yum install

1. Upload the license file to /etc/elasticsearch/ (permissions 660) and change its owner and group

2. Starting the x-pack trial errors
curl -H "Content-Type:application/json" -XPOST http://10.4.7.101:9200/_xpack/license/start_trial?acknowledge=true

If this errors, the pasted command probably contains bad whitespace: delete the suspect spaces and retype them, after which the call returns the expected success result.
3. Create usernames and passwords for ES
bin/elasticsearch-setup-passwords interactive

If this errors, add the following to the elasticsearch configuration file:
xpack.security.enabled: true

Then restart ES.

4. Import the SSL certificate

The certificate files must be placed in /etc/elasticsearch/ with 660 permissions and the owner and group changed, otherwise there will be file-read permission problems.

5. Import the license

curl -XPUT -u elastic 'http://10.4.7.101:9200/_license?acknowledge=true' -H "Content-Type: application/json" -d @/etc/elasticsearch/kc-lisence.json

Possible errors:
- Wrong license path, or the license file does not exist: ES complains that the license content is empty.
- The license JSON fields must not all be wrapped in double quotes (numeric fields stay unquoted).

Fix those and the call returns the expected success result.
 
6. /etc/elasticsearch/elasticsearch.yml
network.host: 10.4.7.101,127.0.0.1

If the IP written when generating the SSL certificate is the host IP, the config file must list both addresses as above, otherwise ES errors on startup.

With that, the crack is complete. Be sure to record the passwords of the users you just set; accounts with different permission levels are used to log in to Kibana.

Log collection configuration

1. filebeat collects Nginx logs in JSON format


1. Shortcomings of plain Nginx logs:
   - the whole line is a single value, so fields cannot be split out for display and search
   - the index name is meaningless
2. The ideal situation:
   {
     $remote_addr : 192.168.12.254
     - : -
     $remote_user : -
     [$time_local] : [10/Sep/2019:10:52:08 +0800]
     $request : GET /jhdgsjfgjhshj HTTP/1.0
     $status : 404
     $body_bytes_sent : 153
     $http_referer : -
     $http_user_agent : ApacheBench/2.3
     $http_x_forwarded_for : -
   }
3. Goal: convert the Nginx logs to JSON format
4. Modify the nginx configuration so the logs become JSON:
   log_format json '{ "time_local": "$time_local", '
                   '"remote_addr": "$remote_addr", '
                   '"referer": "$http_referer", '
                   '"request": "$request", '
                   '"status": $status, '
                   '"bytes": $body_bytes_sent, '
                   '"agent": "$http_user_agent", '
                   '"x_forwarded": "$http_x_forwarded_for", '
                   '"up_addr": "$upstream_addr",'
                   '"up_host": "$upstream_http_host",'
                   '"upstream_time": "$upstream_response_time",'
                   '"request_time": "$request_time"'
                   ' }';
   access_log /var/log/nginx/access.log json;
   # Truncate the old log
   [root@db01 ~]# > /var/log/nginx/access.log
   # Check the config and restart nginx
   [root@db01 ~]# nginx -t
   [root@db01 ~]# systemctl restart nginx
5. Modify the filebeat configuration:
   cat >/etc/filebeat/filebeat.yml<<EOF
   filebeat.inputs:
   - type: log
     enabled: true
     paths:
       - /var/log/nginx/access.log
     json.keys_under_root: true
     json.overwrite_keys: true
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
   EOF
6. Delete the old ES index:
   es-head >> filebeat-6.6.0-2019.11.15 >> actions >> delete
7. Delete the old log data in kibana
8. Restart filebeat:
   [root@db01 ~]# systemctl restart filebeat
9. curl nginx once and check in the es-head plugin:
   [root@db01 ~]# curl 127.0.0.1
   db01-www
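json.keys_under_root only works if every access-log line really is valid JSON, so it is worth checking one line before pointing filebeat at the file; a sketch with a hypothetical sample line shaped like the log_format above (note $status and $body_bytes_sent are emitted unquoted):

```shell
# One line as the json log_format would write it.
line='{ "time_local": "10/Sep/2019:10:52:08 +0800", "remote_addr": "192.168.12.254", "status": 404, "bytes": 153 }'
if printf '%s' "$line" | python3 -m json.tool > /dev/null; then
  echo "valid json"
fi
```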

2. Custom ES index names in filebeat


1. Ideal index names:
   filebeat-6.6.0-2020.02.13
   nginx-6.6.0-2019.11.15
2. filebeat configuration:
   [root@db01 ~]# cat >/etc/filebeat/filebeat.yml<<EOF
   filebeat.inputs:
   - type: log
     enabled: true
     paths:
       - /var/log/nginx/access.log
     json.keys_under_root: true
     json.overwrite_keys: true
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
   setup.template.name: "nginx"
   setup.template.pattern: "nginx-*"
   setup.template.enabled: false
   setup.template.overwrite: true
   EOF
3. Restart filebeat:
   [root@db01 ~]# systemctl restart filebeat
4. Generate fresh logs and check:
   [root@db01 ~]# curl 127.0.0.1
5. Check in the es-head plugin and add the index pattern in kibana
 

3. filebeat splits indices by service type


  1. 1.理想中的情况:
  2. nginx-access-6.6.0-2020.02
  3. nginx-error-6.6.0-2020.02
  4. 2.filebeat配置
  5. #第一种方法:
  6. [root@db01 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
  7. filebeat.inputs:
  8. - type: log
  9. enabled: true
  10. paths:
  11. - /var/log/nginx/access.log
  12. json.keys_under_root: true
  13. json.overwrite_keys: true
  14. - type: log
  15. enabled: true
  16. paths:
  17. - /var/log/nginx/error.log
  18. output.elasticsearch:
  19. hosts: ["10.0.0.51:9200"]
  20. indices:
  21. - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
  22. when.contains:
  23. source: "/var/log/nginx/access.log"
  24. - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
  25. when.contains:
  26. source: "/var/log/nginx/error.log"
  27. setup.template.name: "nginx"
  28. setup.template.pattern: "nginx-*"
  29. setup.template.enabled: false
  30. setup.template.overwrite: true
  31. EOF
  32. #第二种方法:
  33. [root@db01 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
  34. filebeat.inputs:
  35. - type: log
  36. enabled: true
  37. paths:
  38. - /var/log/nginx/access.log
  39. json.keys_under_root: true
  40. json.overwrite_keys: true
  41. tags: ["access"]
  42. - type: log
  43. enabled: true
  44. paths:
  45. - /var/log/nginx/error.log
  46. tags: ["error"]
  47. output.elasticsearch:
  48. hosts: ["10.0.0.51:9200"]
  49. indices:
  50. - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
  51. when.contains:
  52. tags: "access"
  53. - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
  54. when.contains:
  55. tags: "error"
  56. setup.template.name: "nginx"
  57. setup.template.pattern: "nginx-*"
  58. setup.template.enabled: false
  59. setup.template.overwrite: true
  60. EOF
  61. 3.重启filebeat
  62. [root@db01 ~]# systemctl restart filebeat
  63. 4.生成正确和错误的测试数据
  64. [root@db01 ~]# curl 127.0.0.1/haahha
  65. [root@db01 ~]# curl 127.0.0.1
  66. 5.检查是否生成对应的索引
  67. nginx-access-6.6.0-2020.02
  68. nginx-error-6.6.0-2020.02

 

4. Collecting nginx logs from multiple servers


1. Install nginx on the other server
   # Switch to the official nginx repo
   [root@db02 ~]# cat /etc/yum.repos.d/nginx.repo
   [nginx-stable]
   name=nginx stable repo
   baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
   gpgcheck=1
   enabled=1
   gpgkey=https://nginx.org/keys/nginx_signing.key
   module_hotfixes=true
   [nginx-mainline]
   name=nginx mainline repo
   baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
   gpgcheck=1
   enabled=0
   gpgkey=https://nginx.org/keys/nginx_signing.key
   module_hotfixes=true
   # Install nginx
   [root@db02 ~]# yum install nginx -y
2. Copy the nginx configuration from db01
   [root@db02 ~]# scp 10.0.0.51:/etc/nginx/nginx.conf /etc/nginx/nginx.conf
   [root@db02 ~]# scp 10.0.0.51:/etc/nginx/conf.d/www.conf /etc/nginx/conf.d/
3. Create a test page
   [root@db02 ~]# mkdir /code/www/ -p
   [root@db02 ~]# echo "db02-www" > /code/www/index.html
4. Restart nginx
   [root@db02 ~]# >/var/log/nginx/access.log
   [root@db02 ~]# >/var/log/nginx/error.log
   [root@db02 ~]# nginx -t
   [root@db02 ~]# systemctl restart nginx
5. Install filebeat
   [root@db02 ~]# rpm -ivh filebeat-6.6.0-x86_64.rpm
6. Copy the filebeat configuration
   [root@db02 ~]# scp 10.0.0.51:/etc/filebeat/filebeat.yml /etc/filebeat/
7. Start filebeat
   [root@db02 ~]# systemctl restart filebeat
8. Generate test data
   [root@db02 ~]# curl 127.0.0.1/22222222222222
   [root@db02 ~]# curl 127.0.0.1

The complete filebeat configuration for collecting nginx:
   [root@db01]# cat /etc/filebeat/filebeat.yml
   filebeat.inputs:
   - type: log
     enabled: true
     paths:
       - /var/log/nginx/access.log
     json.keys_under_root: true
     json.overwrite_keys: true
   - type: log
     enabled: true
     paths:
       - /var/log/nginx/error.log
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     indices:
       - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           source: "/var/log/nginx/access.log"
       - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           source: "/var/log/nginx/error.log"
   setup.template.name: "nginx"
   setup.template.pattern: "nginx-*"
   setup.template.enabled: false
   setup.template.overwrite: true

5. filebeat collects tomcat JSON logs

1. Install tomcat and filebeat

tomcat installation omitted

Install filebeat on the tomcat server:

yum localinstall filebeat-6.7.0-x86_64.rpm -y

2. Configure the tomcat log format as JSON

  [root@web01 ~]# /data/apache-tomcat-8.0.33/bin/shutdown.sh
  [root@web01 ~]# vim /data/apache-tomcat-8.0.33/conf/server.xml
  Change the access-log valve's pattern attribute to the JSON format below:
  pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
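The &quot; entities are required because the pattern sits inside an XML attribute value; decoding them previews the JSON shape each request will produce (the pattern below is a shortened copy of the one above):

```shell
# server.xml must escape inner quotes as &quot;;
# decoding shows the JSON template tomcat will fill in per request.
pattern='{&quot;clientip&quot;:&quot;%h&quot;,&quot;status&quot;:&quot;%s&quot;}'
decoded=$(printf '%s\n' "$pattern" | sed 's/&quot;/"/g')
echo "$decoded"   # → {"clientip":"%h","status":"%s"}
```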

3. Start tomcat

/data/apache-tomcat-8.0.33/bin/startup.sh

4. Configure filebeat

  cat >/etc/filebeat/filebeat.yml <<EOF
  filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /data/apache-tomcat-8.0.33/logs/localhost_access_log.*.txt
    json.keys_under_root: true
    json.overwrite_keys: true
    tags: ["tomcat"]
  output.elasticsearch:
    hosts: ["10.4.7.101:9200"]
    index: "tomcat_access-%{[beat.version]}-%{+yyyy.MM}"
    username: "elastic"
    password: "123456"
  xpack.monitoring.enabled: true
  xpack.monitoring.elasticsearch:
  setup.template.name: "tomcat"
  setup.template.pattern: "tomcat_*"
  setup.template.enabled: false
  setup.template.overwrite: true
  EOF

5. Restart filebeat

systemctl restart filebeat

6. Access tomcat and check that data and x-pack monitoring entries appear

6. filebeat multiline matching for java logs

1. filebeat configuration file

  [root@db01 ~]# cat /etc/filebeat/filebeat.yml
  filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/elasticsearch/elasticsearch.log
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
  output.elasticsearch:
    hosts: ["10.0.0.51:9200"]
    index: "es-%{[beat.version]}-%{+yyyy.MM}"
    username: "elastic"
    password: "123456"
  xpack.monitoring.enabled: true
  xpack.monitoring.elasticsearch:
  setup.template.name: "tomcat"
  setup.template.pattern: "tomcat_*"
  setup.template.enabled: false
  setup.template.overwrite: true

After configuring, restart filebeat.
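With pattern '^\[', negate: true and match: after, every line that does not start with [ is appended to the previous event. The grouping rule can be simulated with awk on a hypothetical stack trace:

```shell
# Lines 1 and 4 open new events; the two middle lines are
# continuation lines that filebeat would glue onto event 1.
events=$(printf '%s\n' \
  '[2020-02-13T10:00:00] ERROR something broke' \
  'java.lang.NullPointerException' \
  '    at com.example.Main.run(Main.java:42)' \
  '[2020-02-13T10:00:01] INFO recovered' \
  | awk '/^\[/ {n++} END {print n}')
echo "$events events"   # → 2 events
```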

2. filebeat collects tomcat's catalina.out and access logs

  filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /data/apache-tomcat-8.0.33/logs/localhost_access_log.*.txt
    json.keys_under_root: true
    json.overwrite_keys: true
    tags: ["access"]
  - type: log
    enabled: true
    paths:
      - /data/apache-tomcat-8.0.33/logs/catalina.out
    tags: ["catalina"]
  output.elasticsearch:
    hosts: ["10.4.7.101:9200"]
    indices:
      - index: "tomcat_access-%{[beat.version]}-%{+yyyy.MM}"
        when.contains:
          tags: "access"
      - index: "tomcat_catalina-%{[beat.version]}-%{+yyyy.MM}"
        when.contains:
          tags: "catalina"
    username: "elastic"
    password: "123456"
  xpack.monitoring.enabled: true
  xpack.monitoring.elasticsearch:
  setup.template.name: "tomcat"
  setup.template.pattern: "tomcat_*"
  setup.template.enabled: false
  setup.template.overwrite: true

After configuring, restart filebeat.

7. filebeat collects nginx logs using the nginx module


1. Clear the logs and restore nginx logging to the plain format
   # Truncate the log
   [root@db01 ~]# > /var/log/nginx/access.log
   # Edit the configuration file
   [root@db01 ~]# vim /etc/nginx/nginx.conf
   log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" "$http_x_forwarded_for"';
   access_log /var/log/nginx/access.log main;
   # Check and restart
   [root@db01 ~]# nginx -t
   [root@db01 ~]# systemctl restart nginx
2. Send a request and check the log is back to the plain format
   [root@db01 ~]# curl 127.0.0.1
   [root@db01 ~]# tail -f /var/log/nginx/access.log
3. Configure filebeat to support modules
   [root@db01 ~]# cat /etc/filebeat/filebeat.yml
   filebeat.config.modules:
     path: ${path.config}/modules.d/*.yml
     reload.enabled: true
     reload.period: 10s
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     indices:
       - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           event.dataset: "nginx.access"
       - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           event.dataset: "nginx.error"
   setup.template.name: "nginx"
   setup.template.pattern: "nginx-*"
   setup.template.enabled: false
   setup.template.overwrite: true
4. Enable filebeat's nginx module
   [root@db01 ~]# filebeat modules enable nginx
   [root@db01 ~]# filebeat modules list
   [root@db01 ~]# ll /etc/filebeat/modules.d/nginx.yml
   -rw-r--r-- 1 root root 369 Jan 24 2019 /etc/filebeat/modules.d/nginx.yml
5. Configure the nginx module file
   [root@db01 ~]# cat >/etc/filebeat/modules.d/nginx.yml <<EOF
   - module: nginx
     access:
       enabled: true
       var.paths: ["/var/log/nginx/access.log"]
     error:
       enabled: true
       var.paths: ["/var/log/nginx/error.log"]
   EOF
6. Install the ES plugins the nginx module needs, then restart ES
   # Upload the plugins
   [root@db01 ~]# ll
   -rw-r--r-- 1 root root 33255554 Jan 8 08:15 ingest-geoip-6.6.0.zip
   -rw-r--r-- 1 root root 62173 Jan 8 08:15 ingest-user-agent-6.6.0.zip
   # Change directory and install the plugins
   [root@db01 ~]# cd /usr/share/elasticsearch/
   [root@db01 ~]# ./bin/elasticsearch-plugin install file:///root/ingest-geoip-6.6.0.zip
   Note: the install asks for a "y" confirmation
   [root@db01 ~]# ./bin/elasticsearch-plugin install file:///root/ingest-user-agent-6.6.0.zip
   [root@db01 ~]# systemctl restart elasticsearch
7. Restart filebeat
   [root@db01 ~]# systemctl restart filebeat
8. Delete the old nginx data in the es-head plugin and in kibana
   Generate fresh log data, refresh es-head to check, and add the index pattern in kibana

8. filebeat collects mysql slow logs using the mysql module


# Binary install of MySQL
1. Download or upload the package
   wget https://downloads.mysql.com/archives/get/file/mysql-5.6.44-linux-glibc2.12-x86_64.tar.gz
2. Unpack
   [root@db01 ~]# tar xf mysql-5.6.44-linux-glibc2.12-x86_64.tar.gz
   [root@db01 ~]# ll
   total 321404
   drwxr-xr-x 13 root root       191 Oct 31 04:31 mysql-5.6.44-linux-glibc2.12-x86_64
   -rw-r--r--  1 root root 329105487 Oct 30 10:23 mysql-5.6.44-linux-glibc2.12-x86_64.tar.gz
3. Install dependency packages
   [root@db01 ~]# yum install -y autoconf libaio*
4. Create the mysql user
   [root@db01 ~]# useradd mysql -s /sbin/nologin -M
   [root@db01 ~]# id mysql
   uid=1000(mysql) gid=1000(mysql) groups=1000(mysql)
5. Move the unpacked directory to /opt and rename it
   [root@db01 ~]# mv mysql-5.6.44-linux-glibc2.12-x86_64 /opt/mysql-5.6.44
   [root@db01 ~]# cd /opt/mysql-5.6.44/
   [root@db01 /opt/mysql-5.6.44]# ll
   total 40
   drwxr-xr-x  2 root  root   4096 Oct 31 04:31 bin
   -rw-r--r--  1 7161 31415 17987 Mar 15  2019 COPYING
   drwxr-xr-x  3 root  root     18 Oct 31 04:30 data
   drwxr-xr-x  2 root  root     55 Oct 31 04:30 docs
   drwxr-xr-x  3 root  root   4096 Oct 31 04:30 include
   drwxr-xr-x  3 root  root    316 Oct 31 04:31 lib
   drwxr-xr-x  4 root  root     30 Oct 31 04:30 man
   drwxr-xr-x 10 root  root    291 Oct 31 04:30 mysql-test
   -rw-r--r--  1 7161 31415  2496 Mar 15  2019 README
   drwxr-xr-x  2 root  root     30 Oct 31 04:30 scripts
   drwxr-xr-x 28 root  root   4096 Oct 31 04:31 share
   drwxr-xr-x  4 root  root   4096 Oct 31 04:31 sql-bench
   drwxr-xr-x  2 root  root    136 Oct 31 04:30 support-files
6. Create a symlink
   [root@db01 ~]# ln -s /opt/mysql-5.6.44/ /opt/mysql
   [root@db01 ~]# ll /opt/mysql
   lrwxrwxrwx 1 root root 18 Oct 31 04:37 /opt/mysql -> /opt/mysql-5.6.44/
7. Copy the startup script
   [root@db01 /opt/mysql-5.6.44]# cd /opt/mysql-5.6.44/support-files/
   [root@db01 /opt/mysql-5.6.44/support-files]# cp mysql.server /etc/init.d/mysqld
   [root@db01 /opt/mysql-5.6.44/support-files]# ll /etc/init.d/mysqld
   -rwxr-xr-x 1 root root 10565 Oct 31 04:40 /etc/init.d/mysqld
8. Copy the configuration file
   [root@db01 /opt/mysql-5.6.44/support-files]# cp my-default.cnf /etc/my.cnf
   cp: overwrite ‘/etc/my.cnf’? y
   [root@db01 /opt/mysql-5.6.44/support-files]# ll /etc/my.cnf
   -rw-r--r--. 1 root root 1126 Oct 31 04:41 /etc/my.cnf
9. Initialize the database
   [root@db01 /opt/mysql-5.6.44/support-files]# cd ../scripts/
   [root@db01 /opt/mysql-5.6.44/scripts]# ll
   total 36
   -rwxr-xr-x 1 7161 31415 34558 Mar 15  2019 mysql_install_db
   [root@db01 /opt/mysql-5.6.44/scripts]# ./mysql_install_db --basedir=/opt/mysql --datadir=/opt/mysql/data --user=mysql
   # Two OKs in the output is enough
10. Set ownership of the mysql directories
   [root@db01 /opt/mysql-5.6.44/scripts]# chown -R mysql.mysql /opt/mysql-5.6.44/
   [root@db01 /opt/mysql-5.6.44/scripts]# ll /opt/
   total 0
   lrwxrwxrwx  1 mysql mysql  18 Oct 31 04:37 mysql -> /opt/mysql-5.6.44/
   drwxr-xr-x 13 mysql mysql 223 Oct 31 04:43 mysql-5.6.44
11. Fix the paths in the startup script and mysqld_safe
   [root@db01 /opt/mysql-5.6.44/scripts]# sed -i 's#/usr/local#/opt#g' /etc/init.d/mysqld /opt/mysql/bin/mysqld_safe
12. Start mysql
   [root@db01 /opt/mysql-5.6.44/scripts]# /etc/init.d/mysqld start
   Starting MySQL.Logging to '/opt/mysql/data/db01.err'.
   SUCCESS!
13. Add environment variables
   [root@db01 /opt/mysql-5.6.44/scripts]# vim /etc/profile.d/mysql.sh
   export PATH="/opt/mysql/bin:$PATH"
   [root@db01 /opt/mysql-5.6.44/scripts]# source /etc/profile.d/mysql.sh
14. Log in to mysql
   [root@db01 /opt/mysql-5.6.44/scripts]# mysql
   Welcome to the MySQL monitor.  Commands end with ; or \g.
   Your MySQL connection id is 1
   Server version: 5.6.44 MySQL Community Server (GPL)
   Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
   Oracle is a registered trademark of Oracle Corporation and/or its
   affiliates. Other names may be trademarks of their respective
   owners.
   Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
   mysql>
==============================================================================
# filebeat collects mysql slow logs with the mysql module
1. Configure the mysql error-log and slow-log paths
   Edit my.cnf:
   [root@db01 ~]# vim /etc/my.cnf
   [mysqld]
   slow_query_log=ON
   slow_query_log_file=/opt/mysql/data/slow.log
   long_query_time=1
2. Restart mysql
   [root@db01 ~]# /etc/init.d/mysqld restart
3. Statement to produce a slow query
   mysql> select sleep(2) user,host from mysql.user ;
4. Confirm the slow log and error log are being written
   [root@db01 ~]# mysql -e "show variables like '%slow_query_log%'"
   +---------------------+--------------------------+
   | Variable_name       | Value                    |
   +---------------------+--------------------------+
   | slow_query_log      | ON                       |
   | slow_query_log_file | /opt/mysql/data/slow.log |
   +---------------------+--------------------------+
5. Enable filebeat's mysql module
   [root@db01 ~]# filebeat modules enable mysql
6. Configure the mysql module
   [root@db01 ~]# cat /etc/filebeat/modules.d/mysql.yml
   - module: mysql
     # Error logs
     error:
       enabled: true
       var.paths: ["/opt/mysql/data/db01.err"]
     # Slow logs
     slowlog:
       enabled: true
       var.paths: ["/opt/mysql/data/slow.log"]
7. Configure filebeat to route by log type
   [root@db01 ~]# cat /etc/filebeat/filebeat.yml
   filebeat.config.modules:
     path: ${path.config}/modules.d/*.yml
     reload.enabled: true
     reload.period: 10s
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     indices:
       - index: "mysql_slow-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           source: "/opt/mysql/data/slow.log"
       - index: "mysql_error-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           source: "/opt/mysql/data/db01.err"
   setup.template.name: "mysql"
   setup.template.pattern: "mysql-*"
   setup.template.enabled: false
   setup.template.overwrite: true
8. Restart filebeat
   [root@db01 ~]# systemctl restart filebeat
9. Generate slow-query data
   mysql> select sleep(2) user,host from mysql.user ;
   +------+-----------+
   | user | host      |
   +------+-----------+
   | 0    | 127.0.0.1 |
   | 0    | ::1       |
   | 0    | db01      |
   | 0    | db01      |
   | 0    | localhost |
   | 0    | localhost |
   +------+-----------+
   6 rows in set (12.01 sec)
10. Query in the es-head plugin and add the index pattern in kibana
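To confirm the slow log really captures queries above long_query_time, you can pull Query_time out of an entry header; the entry below is hypothetical but follows the standard MySQL slow-log header format:

```shell
# Every slow-log entry carries a "# Query_time: ..." header line.
entry='# Query_time: 2.000123  Lock_time: 0.000000 Rows_sent: 6  Rows_examined: 6'
qt=$(printf '%s\n' "$entry" | awk '/Query_time/ {print $3}')
echo "$qt"   # → 2.000123
```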

Collecting docker logs

1. filebeat collects docker logs (basic version)


1. Install docker
   [root@db02 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
   [root@db02 ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
   [root@db02 ~]# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
   [root@db02 ~]# yum makecache fast
   [root@db02 ~]# yum install docker-ce -y
   [root@db02 ~]# mkdir -p /etc/docker
   [root@db02 ~]# tee /etc/docker/daemon.json <<-'EOF'
   {
     "registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"]
   }
   EOF
   [root@db02 ~]# systemctl daemon-reload
   [root@db02 ~]# systemctl restart docker
2. Start 2 nginx containers for testing
   [root@db02 ~]# docker run -d -p 80:80 nginx
   [root@db02 ~]# docker run -d -p 8080:80 nginx
3. Check that both respond
   [root@db02 ~]# curl 10.0.0.52
   [root@db02 ~]# curl 10.0.0.52:8080
4. Configure filebeat
   [root@db02 ~]# cat /etc/filebeat/filebeat.yml
   filebeat.inputs:
   - type: docker
     containers.ids:
       - '*'
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     index: "docker-%{[beat.version]}-%{+yyyy.MM}"
   setup.template.name: "docker"
   setup.template.pattern: "docker-*"
   setup.template.enabled: false
   setup.template.overwrite: true
5. Restart filebeat
   [root@db02 ~]# systemctl restart filebeat
6. Restart ES
   [root@db02 ~]# systemctl restart elasticsearch
7. Generate test data
   [root@db02 ~]# curl 10.0.0.52/1111111111
   [root@db02 ~]# curl 10.0.0.52:8080/2222222222
8. Query in the es-head plugin and add the index pattern in kibana


2. filebeat collects docker logs, splitting indices per service via docker-compose


1. The scenario:
   nginx container on port 80
   tomcat container on port 8080
2. Ideal index names:
   docker-nginx-6.6.0-2020.02
   docker-tomcat-6.6.0-2020.02
3. Ideal log record format:
   nginx container log:
   {
     "log": "xxxxxx",
     "stream": "stdout",
     "time": "xxxx",
     "service": "nginx"
   }
   tomcat container log:
   {
     "log": "xxxxxx",
     "stream": "stdout",
     "time": "xxxx",
     "service": "tomcat"
   }
4. docker-compose configuration
   [root@db02 ~]# yum install docker-compose -y
   [root@db02 ~]# cat >docker-compose.yml<<EOF
   version: '3'
   services:
     nginx:
       image: nginx:latest
       labels:
         service: nginx
       logging:
         options:
           labels: "service"
       ports:
         - "80:80"
     tomcat:
       image: nginx:latest
       labels:
         service: tomcat
       logging:
         options:
           labels: "service"
       ports:
         - "8080:80"
   EOF
5. Remove the old containers
   [root@db02 ~]# docker stop $(docker ps -q)
   [root@db02 ~]# docker rm $(docker ps -qa)
6. Start the containers
   [root@db02 ~]# docker-compose up -d
7. Configure filebeat
   [root@db02 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
   filebeat.inputs:
   - type: log
     enabled: true
     paths:
       - /var/lib/docker/containers/*/*-json.log
     json.keys_under_root: true
     json.overwrite_keys: true
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     indices:
       - index: "docker-nginx-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           attrs.service: "nginx"
       - index: "docker-tomcat-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           attrs.service: "tomcat"
   setup.template.name: "docker"
   setup.template.pattern: "docker-*"
   setup.template.enabled: false
   setup.template.overwrite: true
   EOF
8. Restart filebeat
   [root@db02 ~]# systemctl restart filebeat
9. Generate access logs
   [root@db02 ~]# curl 127.0.0.1/nginxxxxxxxxxxx
   [root@db02 ~]# curl 127.0.0.1:8080/dbbbbbbbbb
10. Check in the es-head plugin
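With labels: "service" in the logging options, the json-file driver is expected to record the label under attrs in each log line, which is what the when.contains: attrs.service conditions match on. A sketch parsing a hypothetical line (the exact line shape is an assumption, not captured from a real container):

```shell
# One line shaped like /var/lib/docker/containers/<id>/<id>-json.log.
line='{"log":"GET / 200\n","stream":"stdout","attrs":{"service":"nginx"},"time":"2020-02-13T10:00:00Z"}'
service=$(printf '%s' "$line" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["attrs"]["service"])')
echo "$service"   # → nginx
```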


3. filebeat collects docker logs, split by log type (access/error)


1. Shortcoming of the docker collection so far:
   normal logs and error logs end up in the same index
2. Ideal index names:
   docker-nginx-access-6.6.0-2020.02
   docker-nginx-error-6.6.0-2020.02
   docker-db-access-6.6.0-2020.02
   docker-db-error-6.6.0-2020.02
3. filebeat configuration
   [root@db02 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
   filebeat.inputs:
   - type: log
     enabled: true
     paths:
       - /var/lib/docker/containers/*/*-json.log
     json.keys_under_root: true
     json.overwrite_keys: true
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     indices:
       - index: "docker-nginx-access-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           attrs.service: "nginx"
           stream: "stdout"
       - index: "docker-nginx-error-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           attrs.service: "nginx"
           stream: "stderr"
       - index: "docker-tomcat-access-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           attrs.service: "tomcat"
           stream: "stdout"
       - index: "docker-tomcat-error-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           attrs.service: "tomcat"
           stream: "stderr"
   setup.template.name: "docker"
   setup.template.pattern: "docker-*"
   setup.template.enabled: false
   setup.template.overwrite: true
   EOF
4. Restart filebeat
   [root@db02 ~]# systemctl restart filebeat
5. Generate test data
   [root@db02 ~]# curl 127.0.0.1/nginxxxxxxxxxxx
   [root@db02 ~]# curl 127.0.0.1:8080/dbbbbbbbbb
6. Check in the es-head plugin


4. filebeat collects docker logs (refined version)


1. Requirements
   JSON format, with the following indices generated:
   docker-nginx-access-6.6.0-2020.02
   docker-tomcat-access-6.6.0-2020.02
   docker-tomcat-error-6.6.0-2020.02
   docker-nginx-error-6.6.0-2020.02
2. Stop and remove the old containers
   [root@db02 ~]# docker stop $(docker ps -qa)
   [root@db02 ~]# docker rm $(docker ps -qa)
3. Create new containers with the container logs mounted out to the host
   [root@db02 ~]# docker run -d -p 80:80 -v /opt/nginx:/var/log/nginx/ nginx
   [root@db02 ~]# docker run -d -p 8080:80 -v /opt/tomcat:/var/log/nginx/ nginx
   [root@db02 ~]# ll /opt/
   drwxr-xr-x 2 root root 41 Mar 1 10:24 nginx
   drwxr-xr-x 2 root root 41 Mar 1 10:25 tomcat
4. Prepare a JSON-format nginx config; copy the nginx config from the other machine to this server
   [root@db02 ~]# scp 10.0.0.51:/etc/nginx/nginx.conf /root/
   [root@db02 ~]# ll
   -rw-r--r-- 1 root root 1358 Mar 1 10:27 nginx.conf
   # Confirm the log format is JSON
   [root@db02 ~]# grep "access_log" nginx.conf
   access_log /var/log/nginx/access.log json;
5. Copy it into the containers and restart them
   # Look up the container IDs
   [root@db02 ~]# docker ps
   [root@db02 ~]# docker cp nginx.conf <nginx-container-id>:/etc/nginx/
   [root@db02 ~]# docker cp nginx.conf <tomcat-container-id>:/etc/nginx/
   [root@db02 ~]# docker stop $(docker ps -qa)
   [root@db02 ~]# docker start <nginx-container-id>
   [root@db02 ~]# docker start <tomcat-container-id>
6. Delete the existing ES indices (via the es-head plugin)
7. Configure filebeat
   [root@db02 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
   filebeat.inputs:
   - type: log
     enabled: true
     paths:
       - /opt/nginx/access.log
     json.keys_under_root: true
     json.overwrite_keys: true
     tags: ["nginx_access"]
   - type: log
     enabled: true
     paths:
       - /opt/nginx/error.log
     tags: ["nginx_err"]
   - type: log
     enabled: true
     paths:
       - /opt/tomcat/access.log
     json.keys_under_root: true
     json.overwrite_keys: true
     tags: ["tomcat_access"]
   - type: log
     enabled: true
     paths:
       - /opt/tomcat/error.log
     tags: ["tomcat_err"]
   output.elasticsearch:
     hosts: ["10.0.0.51:9200"]
     indices:
       - index: "docker-nginx-access-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           tags: "nginx_access"
       - index: "docker-nginx-error-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           tags: "nginx_err"
       - index: "docker-tomcat-access-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           tags: "tomcat_access"
       - index: "docker-tomcat-error-%{[beat.version]}-%{+yyyy.MM}"
         when.contains:
           tags: "tomcat_err"
   setup.template.name: "docker"
   setup.template.pattern: "docker-*"
   setup.template.enabled: false
   setup.template.overwrite: true
   EOF
8. Restart filebeat
   [root@db02 ~]# systemctl restart filebeat
9. Access and test
   [root@db02 ~]# curl 127.0.0.1/hahaha
   [root@db02 ~]# curl 127.0.0.1:8080/hahaha
   [root@db02 ~]# cat /opt/nginx/access.log
   [root@db02 ~]# cat /opt/tomcat/access.log
10. Check in es-head

 

Optimization with redis

1. filebeat with a redis buffer (single-node redis)

filebeat ships the logs to redis. Because redis and ES cannot talk to each other directly, the middleware logstash pulls the data from redis and sends it to ES; ES then feeds kibana for display.

1. Install Redis
[root@db01 ~]# yum install redis
[root@db01 ~]# sed -i 's#^bind 127.0.0.1#bind 127.0.0.1 10.0.0.51#' /etc/redis.conf
[root@db01 ~]# systemctl start redis
[root@db01 ~]# netstat -lntup|grep redis
[root@db01 ~]# redis-cli -h 10.0.0.51
2. Stop the docker containers
[root@db01 ~]# docker stop $(docker ps -q)
3. Stop filebeat
[root@db01 ~]# systemctl stop filebeat
4. Delete the old ES indices
5. Confirm the nginx log is in JSON format
[root@db01 ~]# grep "access_log" nginx.conf
access_log /var/log/nginx/access.log json;
6. Rewrite the filebeat configuration
[root@db01 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
output.redis:
  hosts: ["10.0.0.51"]
  keys:
    - key: "nginx_access"
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true
EOF
7. Restart nginx and filebeat
[root@db01 ~]# systemctl restart nginx
[root@db01 ~]# systemctl restart filebeat
8. Generate test data
[root@db01 ~]# curl 127.0.0.1/haha
9. Check the Redis list
[root@db01 ~]# redis-cli -h 10.0.0.51
keys *
TYPE nginx_access
LLEN nginx_access
LRANGE nginx_access 0 -1
Confirm the entries are JSON
10. Install logstash
[root@db01 ~]# rpm -ivh jdk-8u102-linux-x64.rpm
[root@db01 ~]# rpm -ivh logstash-6.6.0.rpm
11. Configure logstash to read from Redis
[root@db01 ~]# cat >/etc/logstash/conf.d/redis.conf<<EOF
input {
  redis {
    host      => "10.0.0.51"
    port      => "6379"
    db        => "0"
    key       => "nginx_access"
    data_type => "list"
  }
  redis {
    host      => "10.0.0.51"
    port      => "6379"
    db        => "0"
    key       => "nginx_error"
    data_type => "list"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_access-%{+yyyy.MM}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_error-%{+yyyy.MM}"
    }
  }
}
EOF
12. Start logstash in the foreground to test
[root@db01 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
13. Verify:
- logstash prints events parsed as JSON
- the indices appear in es-head
- the Redis lists are draining
14. Run logstash in the background
Press ctrl+c, then:
[root@db01 ~]# systemctl start logstash
(logstash is JVM-heavy; you'll hear the fans spin up when it starts)
15. Generate more data and check it in es-head


2. Filebeat with highly available Redis (two servers)

Filebeat can only write to a single Redis node (it does not support Redis Sentinel or Cluster, and Logstash cannot read from Sentinel or Cluster either). To work around this, put an nginx stream proxy in front of Redis on both nodes and float a virtual IP between them with keepalived. Filebeat and Logstash both talk to the VIP: if node 1 fails, node 2 takes over and keeps accepting Filebeat data.
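The proxy's job can be sketched as a primary/backup selection, mirroring the `backup` flag in the nginx upstream below (an illustrative model only; real nginx health checks use `max_fails`/`fail_timeout`):

```python
# Upstream list mirroring the nginx stream config: 10.0.0.51 is primary,
# 10.0.0.52 is marked backup and only used when no primary is healthy.
UPSTREAMS = [
    {"addr": "10.0.0.51:6379", "backup": False},
    {"addr": "10.0.0.52:6379", "backup": True},
]

def pick_server(alive):
    """Return the first healthy non-backup server, else a healthy backup."""
    primaries = [u for u in UPSTREAMS if not u["backup"] and u["addr"] in alive]
    if primaries:
        return primaries[0]["addr"]
    backups = [u for u in UPSTREAMS if u["backup"] and u["addr"] in alive]
    return backups[0]["addr"] if backups else None

print(pick_server({"10.0.0.51:6379", "10.0.0.52:6379"}))  # primary wins
print(pick_server({"10.0.0.52:6379"}))                    # failover to backup
```

This is why step 7 below expects data on db01's Redis but not db02's: while the primary is healthy, the backup never receives traffic.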

1. Prerequisites
- filebeat cannot send to Redis Sentinel or Cluster
- logstash cannot read from Redis Sentinel or Cluster either
2. Install and configure Redis (on db01 and db02)
[root@db01 ~]# yum install redis -y
[root@db02 ~]# yum install redis -y
[root@db01 ~]# sed -i 's#^bind 127.0.0.1#bind 127.0.0.1 10.0.0.51#' /etc/redis.conf
[root@db02 ~]# sed -i 's#^bind 127.0.0.1#bind 127.0.0.1 10.0.0.52#' /etc/redis.conf
[root@db01 ~]# systemctl start redis
[root@db02 ~]# systemctl start redis
3. Install and configure nginx
Configure the official nginx repo first, then:
[root@db01 ~]# yum install nginx -y
[root@db02 ~]# yum install nginx -y
Append this after the closing } on the last line of nginx.conf; do not put it in conf.d:
stream {
    upstream redis {
        server 10.0.0.51:6379 max_fails=2 fail_timeout=10s;
        server 10.0.0.52:6379 max_fails=2 fail_timeout=10s backup;
    }
    server {
        listen 6380;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass redis;
    }
}
# Check the config and start nginx
[root@db01 ~]# nginx -t
[root@db02 ~]# nginx -t
[root@db01 ~]# systemctl start nginx
[root@db02 ~]# systemctl start nginx
4. Install and configure keepalived
[root@db01 ~]# yum install keepalived -y
[root@db02 ~]# yum install keepalived -y
# db01 configuration ======= (virtual IP 10.0.0.100)
[root@db01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id db01
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100
    }
}
# db02 configuration ======= (virtual IP 10.0.0.100)
[root@db02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id db02
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.100
    }
}
[root@db01 ~]# systemctl start keepalived
[root@db02 ~]# systemctl start keepalived
[root@db01 ~]# ip addr |grep 10.0.0.100
5. Verify the proxy can reach Redis
[root@db01 ~]# redis-cli -h 10.0.0.100 -p 6380
# Stop Redis on db01 and check that the VIP still connects
6. Configure filebeat (on one machine only)
[root@db01 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
output.redis:
  hosts: ["10.0.0.100:6380"]   # note: this is the virtual IP 10.0.0.100
  keys:
    - key: "nginx_access"
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true
EOF
7. Verify filebeat data reaches Redis
[root@db01 ~]# curl 127.0.0.1/haha
[root@db01 ~]# redis-cli -h 10.0.0.51          # should have data
[root@db02 ~]# redis-cli -h 10.0.0.52          # should have no data
[root@db01 ~]# redis-cli -h 10.0.0.100 -p 6380 # should have data
8. Configure logstash
[root@db01 ~]# cat >/etc/logstash/conf.d/redis.conf<<EOF
input {
  redis {
    host      => "10.0.0.100"   # note: this is the virtual IP 10.0.0.100
    port      => "6380"
    db        => "0"
    key       => "nginx_access"
    data_type => "list"
  }
  redis {
    host      => "10.0.0.100"   # note: this is the virtual IP 10.0.0.100
    port      => "6380"
    db        => "0"
    key       => "nginx_error"
    data_type => "list"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_access-%{+yyyy.MM}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_error-%{+yyyy.MM}"
    }
  }
}
EOF
9. Start and test
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
# After the test passes, run it in the background
systemctl start logstash
10. Final test
ab -n 10000 -c 100 10.0.0.100/
Check in es-head that the index holds 10000 entries
Stop Redis on db01, send more requests, and confirm logstash still works
Bring Redis on db01 back and test again
11. Check the log data in es-head
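The failover behind the VIP can be sketched as keepalived's VRRP election: among the nodes that are alive, the one with the highest priority holds the virtual IP (a simplified model using the priorities from the configs above; real VRRP also involves preemption and advertisement timers).

```python
# Priorities taken from the keepalived configs: db01 is MASTER (150),
# db02 is BACKUP (100).
NODES = {"db01": 150, "db02": 100}

def vip_holder(alive):
    """Return the alive node with the highest priority, or None."""
    alive_nodes = {n: p for n, p in NODES.items() if n in alive}
    return max(alive_nodes, key=alive_nodes.get) if alive_nodes else None

print(vip_holder({"db01", "db02"}))  # db01 (MASTER, priority 150)
print(vip_holder({"db02"}))          # db02 takes over the VIP
```

This is why `ip addr | grep 10.0.0.100` shows the VIP on db01 while both nodes are up, and why traffic keeps flowing when db01 goes down.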


3. Streamlined filebeat-to-Redis configuration


1. Before: adding a new log path means changing 4 places:
- 2 in filebeat
- 2 in logstash
2. After this optimization, only 2 places:
- 1 in filebeat
- 1 in logstash
3. filebeat configuration
cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
output.redis:
  hosts: ["10.0.0.100:6380"]
  key: "nginx_log"
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true
4. Streamlined logstash configuration
cat /etc/logstash/conf.d/redis.conf
input {
  redis {
    host      => "10.0.0.100"
    port      => "6380"
    db        => "0"
    key       => "nginx_log"
    data_type => "list"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_access-%{+yyyy.MM}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_error-%{+yyyy.MM}"
    }
  }
}
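The routing this streamlined pipeline relies on can be sketched as follows: every event flows through one queue (`nginx_log`), and the output stage fans events out to an index by tag. The index names mirror the logstash config above; the date suffix is an illustrative placeholder for `%{+yyyy.MM}`.

```python
def route(event, period="2019.05"):
    """Map one event to an ES index name based on its tags."""
    tags = event.get("tags", [])
    if "access" in tags:
        return f"nginx_access-{period}"
    if "error" in tags:
        return f"nginx_error-{period}"
    return None  # untagged events are only printed by stdout {}

print(route({"tags": ["access"]}))  # nginx_access-2019.05
print(route({"tags": ["error"]}))   # nginx_error-2019.05
```

Because the split happens at output time, adding a third log type only needs one new input in filebeat (with its tag) and one new branch in logstash, rather than a new Redis key on both sides.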

Kafka buffering scheme / Kibana dashboards

1. Using Kafka as the ELK buffer


#============ Note: ES and Kibana must be started first; zookeeper and kafka both need a Java environment =============#
0. Configure SSH keys and host resolution (the hosts entries are needed on all three machines)
[root@db01 ~]# cat >/etc/hosts<<EOF
10.0.0.51 db01
10.0.0.52 db02
10.0.0.53 db03
EOF
# Generate a key pair and distribute it
[root@db01 ~]# ssh-keygen
[root@db01 ~]# ssh-copy-id 10.0.0.52
[root@db01 ~]# ssh-copy-id 10.0.0.53
1. Install zookeeper
### on db01
[root@db01 ~]# yum install -y rsync
[root@db01 ~]# cd /data/soft
[root@db01 ~]# tar zxf zookeeper-3.4.11.tar.gz -C /opt/
[root@db01 ~]# ln -s /opt/zookeeper-3.4.11/ /opt/zookeeper
[root@db01 ~]# mkdir -p /data/zookeeper
[root@db01 ~]# cat >/opt/zookeeper/conf/zoo.cfg<<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.0.0.51:2888:3888
server.2=10.0.0.52:2888:3888
server.3=10.0.0.53:2888:3888
EOF
[root@db01 ~]# echo "1" > /data/zookeeper/myid
[root@db01 ~]# cat /data/zookeeper/myid
1
[root@db01 ~]# rsync -avz /opt/zookeeper* 10.0.0.52:/opt/
[root@db01 ~]# rsync -avz /opt/zookeeper* 10.0.0.53:/opt/
### on db02
[root@db02 ~]# yum install -y rsync
[root@db02 ~]# mkdir -p /data/zookeeper
[root@db02 ~]# echo "2" > /data/zookeeper/myid
[root@db02 ~]# cat /data/zookeeper/myid
2
### on db03
[root@db03 ~]# yum install -y rsync
[root@db03 ~]# mkdir -p /data/zookeeper
[root@db03 ~]# echo "3" > /data/zookeeper/myid
[root@db03 ~]# cat /data/zookeeper/myid
3
2. Start zookeeper (on all three machines)
[root@db01 ~]# /opt/zookeeper/bin/zkServer.sh start
[root@db02 ~]# /opt/zookeeper/bin/zkServer.sh start
[root@db03 ~]# /opt/zookeeper/bin/zkServer.sh start
3. Check that startup succeeded (on all three machines)
[root@db01 ~]# /opt/zookeeper/bin/zkServer.sh status
[root@db02 ~]# /opt/zookeeper/bin/zkServer.sh status
[root@db03 ~]# /opt/zookeeper/bin/zkServer.sh status
# In a healthy ensemble, the modes should be:
2 followers
1 leader
4. Verify zookeeper communication
On one node, create a znode:
/opt/zookeeper/bin/zkCli.sh -server 10.0.0.51:2181
create /test "hello"
On another node, check that it can be read:
/opt/zookeeper/bin/zkCli.sh -server 10.0.0.52:2181
get /test
5. Install kafka
### on db01
[root@db01 ~]# cd /data/soft/
[root@db01 ~]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@db01 ~]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@db01 ~]# mkdir /opt/kafka/logs
[root@db01 ~]# cat >/opt/kafka/config/server.properties<<EOF
broker.id=1
listeners=PLAINTEXT://10.0.0.51:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
EOF
[root@db01 ~]# rsync -avz /opt/kafka* 10.0.0.52:/opt/
[root@db01 ~]# rsync -avz /opt/kafka* 10.0.0.53:/opt/
### on db02
[root@db02 ~]# sed -i "s#10.0.0.51:9092#10.0.0.52:9092#g" /opt/kafka/config/server.properties
[root@db02 ~]# sed -i "s#broker.id=1#broker.id=2#g" /opt/kafka/config/server.properties
### on db03
[root@db03 ~]# sed -i "s#10.0.0.51:9092#10.0.0.53:9092#g" /opt/kafka/config/server.properties
[root@db03 ~]# sed -i "s#broker.id=1#broker.id=3#g" /opt/kafka/config/server.properties
6. First start kafka in the foreground to test (on all three machines)
[root@db01 ~]# /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
[root@db02 ~]# /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
[root@db03 ~]# /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
7. Check the Java processes (on all three machines)
jps
8. Test messaging while kafka runs in the foreground
Create a topic:
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --partitions 3 --replication-factor 3 --topic messagetest
List all topics:
/opt/kafka/bin/kafka-topics.sh --list --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181
Send test messages:
/opt/kafka/bin/kafka-console-producer.sh --broker-list 10.0.0.51:9092,10.0.0.52:9092,10.0.0.53:9092 --topic messagetest
Consume them on another node:
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --topic messagetest --from-beginning
9. Once the test succeeds, switch to background mode (on all three machines)
Press ctrl + c to stop the foreground kafka, then start it with -daemon:
[root@db01 ~]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@db02 ~]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@db03 ~]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
10. Configure filebeat
[root@db01 ~]# cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
output.kafka:
  hosts: ["10.0.0.51:9092", "10.0.0.52:9092", "10.0.0.53:9092"]
  topic: 'filebeat'
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true
EOF
Restart filebeat:
[root@db01 ~]# systemctl restart filebeat
11. Send a request and check that kafka received the log
[root@db01 ~]# curl 10.0.0.51
# List the topics; "filebeat" should be among them
[root@db01 ~]# /opt/kafka/bin/kafka-topics.sh --list --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181
# Consume the messages from the filebeat topic
[root@db01 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --topic filebeat --from-beginning
12. logstash configuration
[root@db01 ~]# cat > /etc/logstash/conf.d/kafka.conf<<EOF
input {
  kafka {
    bootstrap_servers => ["10.0.0.51:9092,10.0.0.52:9092,10.0.0.53:9092"]
    topics            => ["filebeat"]
    group_id          => "logstash"
    codec             => "json"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_access-%{+yyyy.MM}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts           => "http://10.0.0.51:9200"
      manage_template => false
      index           => "nginx_error-%{+yyyy.MM}"
    }
  }
}
EOF
13. Start logstash in the foreground to test
# Clear the previously generated ES indices first
[root@db01 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf
Generate an access log entry:
[root@db01 ~]# curl 127.0.0.1
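Why a topic created with `--replication-factor 3` across three brokers survives broker loss can be sketched as follows (an illustrative model of replica placement, not the actual Kafka protocol):

```python
BROKERS = {1, 2, 3}
PARTITIONS = 3
REPLICATION = 3

# Assign replicas round-robin: each partition lives on REPLICATION brokers.
replicas = {
    p: [sorted(BROKERS)[(p + i) % len(BROKERS)] for i in range(REPLICATION)]
    for p in range(PARTITIONS)
}

def available(partition, alive):
    """A partition stays readable while at least one replica broker is alive."""
    return any(b in alive for b in replicas[partition])

# Kill brokers 2 and 3: broker 1 alone still covers every partition.
alive = {1}
print(all(available(p, alive) for p in range(PARTITIONS)))  # True
```

With replication factor 3, every broker holds a copy of every partition, which matches the destruction tests below: collection keeps working as long as one kafka node (and one zookeeper node) survives.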

Failure tests:

Baseline data:

1. Stop zookeeper on db03

# Stop zookeeper
[root@db03 ~]# /opt/zookeeper/bin/zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
# jps now shows two processes (there were three)
[root@db03 ~]# jps
71553 Kafka
72851 Jps
# Generate data from db01
[root@db01 ~]# curl 127.0.0.1
db01-www
# Check in es-head

2. Stop zookeeper on db02

# Check the jps output first
[root@db02 ~]# jps
74467 QuorumPeerMain
78053 Jps
76628 Kafka
# Stop zookeeper on db02
[root@db02 ~]# /opt/zookeeper/bin/zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
# jps now shows two entries
[root@db02 ~]# jps
78210 Jps
76628 Kafka
# Generate data from db01
[root@db01 ~]# curl 127.0.0.1
db01-www
# Check in es-head

3. Stop kafka on db01

# Check the jps output first
[root@db01 ~]# jps
76902 Kafka
48472 Logstash
78089 Logstash
79034 Jps
74509 QuorumPeerMain
# Stop kafka on db01
[root@db01 ~]# /opt/kafka/bin/kafka-server-stop.sh
# Check the jps output again
[root@db01 ~]# jps
79251 Jps
48472 Logstash
78089 Logstash
74509 QuorumPeerMain
# Generate data from db01
[root@db01 ~]# curl 127.0.0.1
db01-www
# Check in es-head

# Kafka experiment summary
1. Prerequisites
- kafka and zookeeper are both Java applications, so they need a Java environment
- both are resource-hungry; make sure there is enough memory
2. zookeeper installation notes
- each machine's myid must be unique and must match the server ids in the config file
- after starting, the roles should be one leader and the rest followers
- test sending and receiving messages
3. kafka installation notes
- kafka depends on zookeeper: if zookeeper is unhealthy, kafka cannot work
- the kafka config must list the IPs of all zookeeper nodes
- each kafka config must use that host's own IP address
- each broker.id must match the myid configured for zookeeper on that host
- kafka has started successfully only once "started" appears in its log
4. Testing zookeeper and kafka
- send a message from one end
- the other ends receive it in real time
5. filebeat configuration
- the output must list all kafka broker IPs
6. logstash configuration
- the input must list all kafka broker IPs; don't forget the []
- start in the foreground to test, then switch to the background
7. Destruction-test result
- as long as 1 zookeeper node and 1 kafka node survive, log collection keeps working

2. Kibana dashboards
