Linux: [Kafka Part 1] Installing Kafka on CentOS 7

Contents

I. Install and start ZooKeeper

II. Install Kafka


I. Install and start ZooKeeper

1. Download

 https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.8.2/apache-zookeeper-3.8.2-bin.tar.gz
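
If the server has direct internet access, the archive can also be fetched on the machine itself instead of uploading it (a minimal sketch; the Apache archive URL below is one possible mirror, and it assumes wget is installed):

 [root@localhost app]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.8.2/apache-zookeeper-3.8.2-bin.tar.gz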

2. Upload the archive to any directory on the Linux host

3. Extract it into /usr/local/

[root@localhost app]# tar -zxvf ./apache-zookeeper-3.8.2-bin.tar.gz -C /usr/local/

4. Change into /usr/local/

[root@localhost app]# cd /usr/local/

5. Rename the extracted directory

 [root@localhost local]# mv ./apache-zookeeper-3.8.2-bin/ ./apache-zookeeper-3.8.2/

6. Create data and log directories inside the installation directory

  [root@localhost local]# cd ./apache-zookeeper-3.8.2/
  [root@localhost apache-zookeeper-3.8.2]# mkdir data
  [root@localhost apache-zookeeper-3.8.2]# mkdir logs
  [root@localhost apache-zookeeper-3.8.2]# ls
  bin  conf  data  docs  lib  LICENSE.txt  logs  NOTICE.txt  README.md  README_packaging.md

7. Edit the configuration file

  [root@localhost apache-zookeeper-3.8.2]# cd ./conf/
  [root@localhost conf]# vim ./zoo_sample.cfg
     # The number of milliseconds of each tick
     tickTime=2000
     # The number of ticks that the initial
     # synchronization phase can take
     initLimit=10
     # The number of ticks that can pass between
     # sending a request and getting an acknowledgement
     syncLimit=5
     # the directory where the snapshot is stored.
     # do not use /tmp for storage, /tmp here is just
     # example sakes.
     # changed: point to the data directory created above
     dataDir=/usr/local/apache-zookeeper-3.8.2/data
     # added: point to the logs directory created above
     dataLogDir=/usr/local/apache-zookeeper-3.8.2/logs
     # the port at which the clients will connect
     clientPort=2181
     # the maximum number of client connections.
     # increase this if you need to handle more clients
     #maxClientCnxns=60
     #
     # Be sure to read the maintenance section of the
     # administrator guide before turning on autopurge.
     #
     # https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
     #
     # The number of snapshots to retain in dataDir
     #autopurge.snapRetainCount=3
     # Purge task interval in hours
     # Set to "0" to disable auto purge feature
     #autopurge.purgeInterval=1
     ## Metrics Providers
     #
     # https://prometheus.io Metrics Exporter
     #metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
     #metricsProvider.httpHost=0.0.0.0
     #metricsProvider.httpPort=7000
     #metricsProvider.exportJvmInfo=true
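
For reference, initLimit and syncLimit are measured in ticks: with tickTime=2000 (2 seconds per tick), initLimit=10 gives followers up to 10 × 2 s = 20 s for the initial sync with the leader, and syncLimit=5 allows at most 5 × 2 s = 10 s between a request and its acknowledgement. For this single-node setup the defaults can stay as they are.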

8. Rename the configuration file (zkServer.sh looks for conf/zoo.cfg by default)

 [root@localhost conf]# mv ./zoo_sample.cfg ./zoo.cfg

9. Start ZooKeeper

  [root@localhost conf]# /usr/local/apache-zookeeper-3.8.2/bin/zkServer.sh start
  /usr/bin/java
  ZooKeeper JMX enabled by default
  Using config: /usr/local/apache-zookeeper-3.8.2/bin/../conf/zoo.cfg
  Starting zookeeper ... STARTED

10. Check ZooKeeper's status

  [root@localhost conf]# /usr/local/apache-zookeeper-3.8.2/bin/zkServer.sh status
  /usr/bin/java
  ZooKeeper JMX enabled by default
  Using config: /usr/local/apache-zookeeper-3.8.2/bin/../conf/zoo.cfg
  Client port found: 2181. Client address: localhost. Client SSL: false.
  Mode: standalone

11. Connect to the service with the client

 [root@localhost conf]# /usr/local/apache-zookeeper-3.8.2/bin/zkCli.sh
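
Once zkCli connects, a few basic commands confirm the service is working (the znode name /demo below is just an example):

  ls /
  create /demo "hello"
  get /demo
  delete /demo
  quit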

    Command to stop the service (additional command):

  [root@localhost conf]# /usr/local/apache-zookeeper-3.8.2/bin/zkServer.sh stop
  /usr/bin/java
  ZooKeeper JMX enabled by default
  Using config: /usr/local/apache-zookeeper-3.8.2/bin/../conf/zoo.cfg
  Stopping zookeeper ... STOPPED

II. Install Kafka

1. Download Kafka

 From https://kafka.apache.org/downloads; the release used here is 3.5.1: Scala 2.12 - kafka_2.12-3.5.1.tgz (asc, sha512)

2. Extract it into /usr/local/ (the extracted directory name kafka_2.12-3.5.1 is kept as is)

 [root@localhost app]# tar -zxvf ./kafka_2.12-3.5.1.tgz -C /usr/local/

3. Enter the Kafka directory and create a log directory

  [root@localhost local]# cd /usr/local/kafka_2.12-3.5.1
  [root@localhost kafka_2.12-3.5.1]# mkdir ./log/

4. Edit the configuration file

  [root@localhost kafka_2.12-3.5.1]# vim ./config/server.properties
  The following settings need to be changed:
     # Broker ID; if the cluster has multiple brokers, each one needs a different ID
     # (a single node can keep 0, in the same spirit as ZooKeeper server IDs)
     broker.id=0
     # Directory where the message log files are stored
     log.dirs=/usr/local/kafka_2.12-3.5.1/log/
     # Address the broker advertises to clients (192.168.154.128 is this VM's IP address)
     advertised.listeners=PLAINTEXT://192.168.154.128:9092
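
The shipped config/server.properties (ZooKeeper mode) also carries the ZooKeeper connection string. With the standalone ZooKeeper above listening on clientPort 2181 on the same host, the default value should already be correct; it is shown here only as something to verify, not to change:

     # Connection string for the ZooKeeper service installed in part I
     zookeeper.connect=localhost:2181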

5. Start Kafka in standalone (single-node) mode

   [root@localhost kafka_2.12-3.5.1]# bin/kafka-server-start.sh -daemon config/server.properties
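
Once the broker is up, a quick smoke test can be run with the bundled CLI tools (a minimal sketch; the topic name demo-topic is only an example, and the bootstrap address should match advertised.listeners):

  [root@localhost kafka_2.12-3.5.1]# bin/kafka-topics.sh --bootstrap-server 192.168.154.128:9092 --create --topic demo-topic --partitions 1 --replication-factor 1
  [root@localhost kafka_2.12-3.5.1]# bin/kafka-topics.sh --bootstrap-server 192.168.154.128:9092 --list
  [root@localhost kafka_2.12-3.5.1]# bin/kafka-console-producer.sh --bootstrap-server 192.168.154.128:9092 --topic demo-topic
  [root@localhost kafka_2.12-3.5.1]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.154.128:9092 --topic demo-topic --from-beginning

Type a few lines into the producer; a consumer started in a second terminal should print them back.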

6. Stop Kafka

 [root@localhost kafka_2.12-3.5.1]# bin/kafka-server-stop.sh
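
At any point, jps from the JDK can be used to see which of the two services is still running; the broker shows up as Kafka and ZooKeeper as QuorumPeerMain (this assumes a full JDK is installed, since jps is not shipped with a bare JRE):

  [root@localhost kafka_2.12-3.5.1]# jps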
