
Personal blog: https://suveng.github.io/blog/

Installing the nginx-kafka plugin

With this plugin, nginx can write data directly into Kafka.

1. Install git

	yum install -y git

2. Change to the /usr/local/src directory, then clone the Kafka C client (librdkafka) source code

	cd /usr/local/src
	git clone https://github.com/edenhill/librdkafka

3. Enter the librdkafka directory and compile it

	cd librdkafka
	yum install -y gcc gcc-c++ pcre-devel zlib-devel
	./configure
	make && make install
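If you want to confirm that librdkafka was installed, a quick check (assuming the default /usr/local prefix used by its configure script):

	ls /usr/local/lib/librdkafka*          # librdkafka.so, librdkafka.so.1, ...
	ls /usr/local/include/librdkafka/      # rdkafka.h and other headers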

4. Install the nginx-Kafka integration module: go to /usr/local/src and clone the ngx_kafka_module source

	cd /usr/local/src
	git clone https://github.com/brg-liuwei/ngx_kafka_module

5. Enter the nginx source directory (compile nginx and build the module in at the same time)

	cd /usr/local/src/nginx-1.12.2
	./configure --add-module=/usr/local/src/ngx_kafka_module/
	make
	make install
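To verify the module was built into nginx, you can check the configure arguments recorded in the binary (assuming the default install prefix /usr/local/nginx):

	/usr/local/nginx/sbin/nginx -V
	# the output should include --add-module=/usr/local/src/ngx_kafka_module/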

6. Modify the nginx configuration file; see nginx.conf in the current directory for details:


#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  65;
    #gzip  on;
    
    kafka;
    kafka_broker_list node-1.xiaoniu.com:9092 node-2.xiaoniu.com:9092 node-3.xiaoniu.com:9092; 	
    
    server {
        listen       80;
        server_name  node-6.xiaoniu.com;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;

    	location = /kafka/track {
                kafka_topic track;
        }

    	location = /kafka/user {
                kafka_topic user;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

}

The main changes are adding the kafka directives (kafka and kafka_broker_list) and the kafka_topic location blocks; the usage notes in liuwei's git repository (ngx_kafka_module) cover them, and a quick configuration check is shown below.
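Before restarting nginx with the new configuration, it is worth testing it first. A minimal sketch, assuming the default install prefix /usr/local/nginx:

	/usr/local/nginx/sbin/nginx -t         # test the configuration syntax
	/usr/local/nginx/sbin/nginx -s reload  # reload if nginx is already running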

7. Start the ZooKeeper and Kafka clusters (and create the topics)

	/bigdata/zookeeper-3.4.9/bin/zkServer.sh start
	/bigdata/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh -daemon /bigdata/kafka_2.11-0.10.2.1/config/server.properties
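The topics referenced in nginx.conf (track and user) need to exist in Kafka. A minimal sketch of creating them with the Kafka 0.10.x CLI, assuming ZooKeeper is reachable at node-1.xiaoniu.com:2181 (the hostname comes from the broker list above; the port is the ZooKeeper default, so adjust both to your cluster):

	# assumed ZooKeeper address; adjust to your cluster
	/bigdata/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --create --zookeeper node-1.xiaoniu.com:2181 --replication-factor 1 --partitions 1 --topic track
	/bigdata/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --create --zookeeper node-1.xiaoniu.com:2181 --replication-factor 1 --partitions 1 --topic user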

8. Start nginx. It reports an error: the librdkafka.so.1 file cannot be found

	error while loading shared libraries: librdkafka.so.1: cannot open shared object file: No such file or directory

The cause is that the newly installed shared library path has not been registered with the dynamic linker.

9. Register the .so library path

	echo "/usr/local/lib" >> /etc/ld.so.conf
	ldconfig
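To confirm the linker can now find librdkafka, query the cache:

	ldconfig -p | grep librdkafka    # should list librdkafka.so.1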

10. Start nginx before testing. Make sure the host is reachable (ping works) and the required ports are open, then test: send data to nginx and check whether a Kafka consumer receives it.

	curl localhost/kafka/track -d "message send to kafka topic"
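To see whether the message actually reached Kafka, a console consumer can be started on the track topic in another terminal. A sketch, assuming the broker address from kafka_broker_list in nginx.conf:

	# broker address taken from kafka_broker_list in nginx.conf
	/bigdata/kafka_2.11-0.10.2.1/bin/kafka-console-consumer.sh --bootstrap-server node-1.xiaoniu.com:9092 --topic track --from-beginning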