As a beginner, I've recently started learning some simple architectures to get a feel for how they fit together.
kafka+zookeeper+filebeat
Producer-consumer
1. Prepare three virtual machines for the nginx and kafka cluster
2. Configure static IP addresses
Note: run only one of the NetworkManager and network services.
Start/stop/restart: systemctl start/stop/restart NetworkManager
Unit files for services managed by systemctl live under /usr/lib/systemd/system, in files ending in .service.
/etc/systemd/system/multi-user.target.wants is the directory of services enabled at boot.
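For example, to see where a unit file lives and whether the service starts at boot (using NetworkManager, since it was just mentioned):
ls /usr/lib/systemd/system/NetworkManager.service   # unit file installed by the package
systemctl is-enabled NetworkManager                 # "enabled" means a symlink exists in the wants directory
ls /etc/systemd/system/multi-user.target.wants/     # the symlinks created by systemctl enable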
Configure DNS:
[root@nginx-kafka03 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114
3. Set the hostname
vim /etc/hostname
hostname -F /etc/hostname
or
hostnamectl set-hostname xxx
4. Add name-resolution entries on every machine
vim /etc/hosts
192.168.1.94 nginx-kafka01
192.168.1.95 nginx-kafka02
192.168.1.96 nginx-kafka03
5. Install basic tools
yum install wget lsof vim -y
6. Install the time-synchronization service
yum -y install chrony
systemctl enable chronyd    # enable at boot (disable turns autostart off)
systemctl start chronyd
Set the timezone:
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
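To verify the timezone and that time is actually syncing (assuming chronyd is already running):
timedatectl            # shows the current time and timezone
chronyc sources -v     # lists the NTP sources chrony is tracking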
7. Disable the firewall
Stop firewalld:
[root@nginx-kafka01 ~]# systemctl stop firewalld
[root@nginx-kafka01 ~]# systemctl disable firewalld
Disable SELinux:
vim /etc/selinux/config
SELINUX=disabled
Disabling SELinux this way requires a reboot.
[root@nginx-kafka03 ~]# getenforce
Disabled
SELinux is a security subsystem inside the Linux kernel. Its rules are very fiddly, so in day-to-day work it is usually turned off.
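If a reboot is inconvenient, SELinux can also be relaxed on the fly (lasts until the next boot):
setenforce 0    # switches to Permissive mode immediately
getenforce      # now reports Permissive instead of Enforcing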
Install the EPEL repository and nginx:
yum install epel-release -y
yum install nginx -y
Start: systemctl start nginx
Enable at boot: systemctl enable nginx
Configuration files:
[root@nginx-kafka01 ~]# cd /etc/nginx/
[root@nginx-kafka01 nginx]# ls
conf.d fastcgi.conf.default koi-utf mime.types.default scgi_params uwsgi_params.default
default.d fastcgi_params koi-win nginx.conf scgi_params.default win-utf
fastcgi.conf fastcgi_params.default mime.types nginx.conf.default uwsgi_params
Main configuration file: nginx.conf
Basic structure of nginx.conf:
...                         # global block
events {                    # events block
    ...
}
http {                      # http block
    ...                     # http global block
    server {                # server block
        ...                 # server global block
        location [PATTERN] {    # location block
            ...
        }
        location [PATTERN] {
            ...
        }
    }
    server {
        ...
    }
    ...                     # http global block
}
1. Global block: directives that affect nginx as a whole, typically the user and group nginx runs as, the pid file path, log paths, configuration includes, and the number of worker processes allowed.
2. events block: settings for the server's network connections with users, such as the maximum number of connections per process, which event-driven model handles connection requests, whether multiple connections may be accepted at once, and whether accepting of multiple connections is serialized.
3. http block: can nest multiple server blocks and configures proxying, caching, log definitions, and most other features and third-party modules, e.g. file includes, mime-type definitions, custom log formats, whether sendfile is used for transfers, connection timeouts, and requests per connection.
4. server block: parameters of a virtual host; one http block may contain several server blocks.
5. location block: request routing and how the various pages are handled.
Editing the configuration file:
vim nginx.conf
Change
listen 80 default_server;
to:
listen 80;
(If it already reads like this, no change is needed.)
Then add an nginx configuration file of your own, which keeps things extensible:
vim /etc/nginx/conf.d/sc.conf
server {
    listen 80 default_server;
    server_name www.sc.com;
    root /usr/share/nginx/html;
    access_log /var/log/nginx/sc/access.log main;
    location / {
    }
}
nginx syntax check:
[root@nginx-kafka01 html]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: [emerg] open() "/var/log/nginx/sc/access.log" failed (2: No such file or directory)
nginx: configuration file /etc/nginx/nginx.conf test failed
The error shows the log directory does not exist yet, so create it:
[root@nginx-kafka01 html]# mkdir /var/log/nginx/sc
[root@nginx-kafka01 html]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reload nginx:
nginx -s reload
As a web server on its own, nginx only serves static pages (html).
[root@nginx-kafka01 kafka_2.12-2.8.1]# cd /usr/share/nginx/html/
[root@nginx-kafka01 html]# ls
404.html 50x.html en-US icons img index.html nginx-logo.png poweredby.png sc.html
[root@nginx-kafka01 html]# cat sc.html
this is sc html
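A quick end-to-end check from any of the machines (IP and page from the setup above; the sc.conf server is the default_server, so requesting by IP works):
curl http://192.168.1.94/sc.html    # should print: this is sc html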
1. Installation:
Install java: yum install java wget -y
Download kafka: wget https://mirrors.bfsu.edu.cn/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz
Unpack: tar xf kafka_2.12-2.8.1.tgz
Kafka bundles its own zookeeper, but here a standalone zookeeper cluster is configured.
Download zookeeper: wget https://mirrors.bfsu.edu.cn/apache/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
I suspected neither mirror download would work, so I fetched the archives from the official sites on Windows and copied them to my three machines with Xftp:
https://kafka.apache.org/downloads
https://downloads.apache.org/zookeeper/zookeeper-3.6.4/
[root@nginx-kafka01 opt]# ls
apache-zookeeper-3.6.3-bin.tar.gz kafka_2.12-2.8.1.tgz
Remember to unpack the archives!
2. Configure kafka
Edit config/server.properties:
Set broker.id to 1, 2, and 3 on the three machines respectively,
and the hostname in listeners=PLAINTEXT://nginx-kafka01:9092 must match each machine's own name. On nginx-kafka01:
broker.id=1
listeners=PLAINTEXT://nginx-kafka01:9092
zookeeper.connect=192.168.1.94:2181,192.168.1.95:2181,192.168.1.96:2181
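One way to script these edits is with sed (a sketch for nginx-kafka01, assuming the stock server.properties; change the id and hostname on the other two nodes):
cd /opt/kafka_2.12-2.8.1
sed -i 's/^broker.id=.*/broker.id=1/' config/server.properties
sed -i 's|^#\?listeners=.*|listeners=PLAINTEXT://nginx-kafka01:9092|' config/server.properties
sed -i 's/^zookeeper.connect=.*/zookeeper.connect=192.168.1.94:2181,192.168.1.95:2181,192.168.1.96:2181/' config/server.properties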
3. Configure zookeeper
Go into /opt/apache-zookeeper-3.6.3-bin/conf:
cd /opt/apache-zookeeper-3.6.3-bin/conf
cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg and add the following three lines:
server.1=192.168.1.94:3888:4888
server.2=192.168.1.95:3888:4888
server.3=192.168.1.96:3888:4888
3888 and 4888 are both ports: one is used for data transfer between cluster members, the other for liveness checks and leader election.
Create the /tmp/zookeeper directory and put a myid file in it whose content is the zookeeper id assigned to this machine.
For example, on 192.168.1.94:
echo 1 > /tmp/zookeeper/myid
[root@nginx-kafka01 conf]# mkdir /tmp/zookeeper
[root@nginx-kafka01 conf]# echo 1 > /tmp/zookeeper/myid
[root@nginx-kafka01 conf]# cd ../bin
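The other two nodes get ids matching their server.N lines in zoo.cfg:
mkdir /tmp/zookeeper && echo 2 > /tmp/zookeeper/myid    # on 192.168.1.95
mkdir /tmp/zookeeper && echo 3 > /tmp/zookeeper/myid    # on 192.168.1.96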
Start zookeeper:
[root@nginx-kafka01 bin]# ./zkServer.sh start
When bringing the stack up, always start zookeeper first and kafka second; when shutting down, stop kafka first and zookeeper last, as in the sketch below.
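A sketch of the matching stop sequence, using the paths from this install (adjust the zookeeper directory to the version actually unpacked):
/opt/kafka_2.12-2.8.1/bin/kafka-server-stop.sh
/opt/apache-zookeeper-3.6.3-bin/bin/zkServer.sh stop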
# Check:
[root@nginx-kafka03 bin]# ./zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/apache-zookeeper-3.6.4-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@nginx-kafka03 bin]# ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/apache-zookeeper-3.6.4-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
Start kafka:
bin/kafka-server-start.sh -daemon config/server.properties
[root@nginx-kafka03 kafka_2.12-2.8.1]# bin/kafka-server-start.sh -daemon config/server.properties
[root@nginx-kafka03 kafka_2.12-2.8.1]# pwd
/opt/kafka_2.12-2.8.1
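Verify the broker is actually listening on 9092 (lsof was installed back in step 5):
lsof -i:9092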
Using zookeeper:
Run bin/zkCli.sh:
[zk: localhost:2181(CONNECTED) 1] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, sc, zookeeper]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[1, 2, 3]
[zk: localhost:2181(CONNECTED) 3] create /sc/yy
Created /sc/yy
[zk: localhost:2181(CONNECTED) 4] ls /sc
[page, xx, yy]
[zk: localhost:2181(CONNECTED) 5] set /sc/yy 90
[zk: localhost:2181(CONNECTED) 6] get /sc/yy
90
Testing
Create a topic:
bin/kafka-topics.sh --create --zookeeper 192.168.1.95:2181 --replication-factor 3 --partitions 3 --topic sc
List topics:
bin/kafka-topics.sh --list --zookeeper 192.168.1.95:2181
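To see how the 3 partitions and their replicas were spread across the brokers:
bin/kafka-topics.sh --describe --zookeeper 192.168.1.95:2181 --topic sc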
Start a console producer:
[root@nginx-kafka01 kafka_2.12-2.8.1]# bin/kafka-console-producer.sh --broker-list 192.168.1.94:9092 --topic sc
>xixi
>haha
>didi
>hello
>world!!!!!
>
Start a console consumer (note the messages come back in a different order: with 3 partitions, kafka only guarantees ordering within a single partition):
[root@nginx-kafka01 kafka_2.12-2.8.1]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.96:9092 --topic sc --from-beginning
haha
hello
didi
xixi
world!!!!!
tmux synchronized panes:
Press ctrl+b, then :
then enter the command: set-window-option synchronize-panes on
(The same command with off turns synchronization back off.)
Connect to zk: bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, seqid, topics]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[0, 1, 2]
[zk: localhost:2181(CONNECTED) 3] get /brokers/ids
null
[zk: localhost:2181(CONNECTED) 4] get /brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://nginx-kafka02:9092"],"jmx_port":9999,"features":{},"host":"nginx-kafka02","timestamp":"1642300427923","port":9092,"version":5}
[zk: localhost:2181(CONNECTED) 5] ls /brokers/ids/0
[]
[zk: localhost:2181(CONNECTED) 6] get /brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://nginx-kafka02:9092"],"jmx_port":9999,"features":{},"host":"nginx-kafka02","timestamp":"1642300427923","port":9092,"version":5}
zookeeper is a distributed, open-source configuration-management service, in the same category as etcd.
Installing filebeat
1. Import the Elastic GPG key:
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
2. Create the repo file: vim /etc/yum.repos.d/fb.repo
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
3. Install with yum:
yum install filebeat -y
rpm -qa | grep filebeat
# checks whether filebeat is installed; rpm -qa lists every package installed on the machine
rpm -ql filebeat
# shows where filebeat was installed and which files belong to it
4. Enable at boot:
systemctl enable filebeat
# yaml format (shown here as the equivalent JSON structure):
{
  "filebeat.inputs": [
    { "type": "log",
      "enabled": true,
      "paths": ["/var/log/nginx/sc_access"]
    }
  ]
}
# Configuration
Back up the original first,
then edit the configuration file /etc/filebeat/filebeat.yml:
cd /etc/filebeat/
[root@nginx-kafka01 filebeat]# cp filebeat.yml filebeat.yml.bak
[root@nginx-kafka01 filebeat]# >filebeat.yml
[root@nginx-kafka01 filebeat]# vim filebeat.yml
Add the following (copy-paste works; yaml is strict about indentation):
filebeat.inputs:
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/nginx/sc/access.log
#==========------------------------------kafka-----------------------------------
output.kafka:
hosts: ["192.168.1.94:9092","192.168.1.95:9092","192.168.1.96:9092"]
topic: nginxlog
keep_alive: 10s
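Before starting the service, filebeat can sanity-check the file and the connection itself (the test subcommands are part of filebeat 7.x):
filebeat test config    # validates the yaml syntax and settings
filebeat test output    # tries to connect to the configured kafka hosts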
# Create the nginxlog topic:
bin/kafka-topics.sh --create --zookeeper 192.168.1.94:2181 --replication-factor 3 --partitions 1 --topic nginxlog
[root@nginx-kafka01 kafka_2.12-2.8.1]# bin/kafka-topics.sh --create --zookeeper 192.168.1.94:2181 --replication-factor 3 --partitions 1 --topic nginxlog
Created topic nginxlog.
# Start the service:
systemctl start filebeat
[root@nginx-kafka01 kafka_2.12-2.8.1]# ps -ef |grep filebeat
root 10606 1 0 09:17 ? 00:00:00 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat
root 10616 8759 0 09:17 pts/1 00:00:00 grep --color=auto filebeat
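To give filebeat something to ship, generate a few access-log entries first (IP and page from the nginx setup above):
for i in {1..5}; do curl -s http://192.168.1.94/sc.html > /dev/null; done
tail -1 /var/log/nginx/sc/access.log    # confirm new entries are arriving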
Testing
Consume the topic:
[root@nginx-kafka01 kafka_2.12-2.8.1]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.94:9092 --topic nginxlog --from-beginning
Success!
About DNS name resolution:
Local hosts file on Windows: C:\Windows\System32\drivers\etc
Resolution order:
1. the browser cache
2. the local hosts file (Linux: /etc/hosts)
3. the local DNS server (Linux: /etc/resolv.conf)
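Each step can be checked by hand (getent goes through /etc/hosts; dig, from the bind-utils package, queries a nameserver directly):
getent hosts nginx-kafka01                     # resolved via /etc/hosts
dig +short kafka.apache.org @114.114.114.114   # asks the nameserver from /etc/resolv.conf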
Problems encountered and solutions:
1. A second output in filebeat.yml kept the service from starting. After reading the logs and the official docs, I learned that filebeat allows only one output; removing the extra output let the service start normally.
2. While writing the yaml, indentation mistakes (or a missing "-") kept my configuration from taking effect.
Takeaways:
1. Learned and consolidated this material.
2. The concept of a "service" used to be fuzzy to me; it is much clearer now.
3. Gained a better picture of the network and of how the data flows.
4. Working through the project raised my standards for backing up information and files.
5. Improved my teamwork and my ability to dig up reference material.