
hadoop-zookeeper-hive-flume-kafka-hbase cluster: fixing login errors, a cli.sh startup failure (Error: java.lang.UnsupportedClassVersionError), and the MySQL grant error below

grant all on *.* to hive@localhist identified by 'hive';  -- fails with ERROR 1064 (42000); note the misspelled host 'localhist'
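A hedged sketch of a fix: 'localhist' is presumably a typo for 'localhost'. The block below only builds and prints the corrected statement; the commented-out line shows how it would be fed to a MySQL 5.x server (on MySQL 8+, `GRANT ... IDENTIFIED BY` is no longer accepted and the user must be created first with `CREATE USER`).

```shell
# Build the corrected GRANT statement; 'localhost' replaces the typo 'localhist'.
SQL="GRANT ALL ON *.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';"
echo "$SQL"
# mysql -uroot -p -e "$SQL"   # run it against the server (MySQL 5.x syntax)
```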

1. Clean up before cloning the VM

[root@hadoop201 ~]# rm -rf anaconda-ks.cfg install.log install.log.syslog

2. Clone the VM: right-click it → Manage → Clone

In the clone wizard:
1) Next
2) Clone from: the current state in the virtual machine
3) Create a full clone
4) Name the VM and choose a storage location
5) Finish

3. Configure the IP address

3.1 Get the MAC address: vi /etc/udev/rules.d/70-persistent-net.rules

Delete the first eth0 entry and rename the remaining eth1 entry to eth0:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:2e:99:a1", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

3.2 Edit the host IP: vi /etc/sysconfig/network-scripts/ifcfg-eth0

      
      
DEVICE=eth0
HWADDR=00:0c:29:b9:4d:21
TYPE=Ethernet
UUID=73c15477-5d31-4956-8be7-2809e1d204db
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.1.208
GATEWAY=192.168.1.3
DNS1=192.168.1.3
      
      

3.3 Configure the hostname

|- Check the current hostname: hostname

hadoop206.cevent.com

|- Edit the network config: vi /etc/sysconfig/network

      NETWORKING=yes

      HOSTNAME=hadoop208.cevent.com

|- Edit the hosts file: vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

      192.168.1.201 hadoop201.cevent.com

      192.168.1.202 hadoop202.cevent.com

      192.168.1.203 hadoop203.cevent.com

3.4 Reboot to apply

(1) sync

(2) reboot

3.5 Login errors: also map the hostnames in the Windows hosts file

Path: C:\Windows\System32\drivers\etc\hosts

      
      
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

# localhost name resolution is handled within DNS itself.
#       127.0.0.1       localhost
#       ::1             localhost

127.0.0.1 www.vmix.com
192.30.253.112 github.com
151.101.88.249 github.global.ssl.fastly.net
127.0.0.1 www.xmind.net
192.168.1.201 hadoop201.cevent.com
192.168.1.202 hadoop202.cevent.com
192.168.1.203 hadoop203.cevent.com
192.168.1.204 hadoop204.cevent.com
192.168.1.205 hadoop205.cevent.com
192.168.1.206 hadoop206.cevent.com
192.168.1.207 hadoop207.cevent.com
192.168.1.208 hadoop208.cevent.com
192.168.1.209 hadoop209.cevent.com
192.168.1.210 hadoop210.cevent.com
192.168.1.211 hadoop211.cevent.com
192.168.1.212 hadoop212.cevent.com

192.168.1.1 windows10.microdone.cn

3.6 ssh login error

      
      
[cevent@hadoop207 ~]$ sudo ssh hadoop209.cevent.com
[sudo] password for cevent:
ssh: Could not resolve hostname hadoop209.cevent.com: Name or service not known
      
       
      
Fix: edit the hosts file: vi /etc/hosts
      
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      
      192.168.1.201 hadoop201.cevent.com
      
      192.168.1.202 hadoop202.cevent.com
      
      192.168.1.203 hadoop203.cevent.com
      
      192.168.1.204 hadoop204.cevent.com
      
      192.168.1.205 hadoop205.cevent.com
      
      192.168.1.206 hadoop206.cevent.com
      
      192.168.1.207 hadoop207.cevent.com
      
      192.168.1.208 hadoop208.cevent.com
      
      192.168.1.209 hadoop209.cevent.com
      
      
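Since the hadoop20x entries are uniform, they can be generated instead of typed by hand. A small sketch (host names and subnet taken from the listing above); append its output to /etc/hosts as root:

```shell
# Emit one hosts(5) line per node in the 201-209 range.
for i in $(seq 201 209); do
  echo "192.168.1.$i hadoop$i.cevent.com"
done
```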

3.7 Passwordless login (the local machine must be able to log in to itself, too)

(1) ssh to host A

[cevent@hadoop210 ~]$ cd .ssh/
[cevent@hadoop210 .ssh]$ ll
total 8
-rw-------. 1 cevent cevent  409 Mar 24 09:40 authorized_keys
-rw-r--r--. 1 cevent cevent 1248 Apr 28 17:16 known_hosts

[cevent@hadoop207 .ssh]$ ssh hadoop208.cevent.com
The authenticity of host 'hadoop208.cevent.com (192.168.1.208)' can't be established.
RSA key fingerprint is fe:07:91:9c:00:5d:09:2c:48:bb:d5:53:9f:09:6c:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop208.cevent.com,192.168.1.208' (RSA) to the list of known hosts.
cevent@hadoop208.cevent.com's password:
Last login: Mon Jun 15 16:06:58 2020 from 192.168.1.1
[cevent@hadoop208 ~]$ exit
logout

(2) ssh to host B

[cevent@hadoop207 ~]$ ssh hadoop209.cevent.com
The authenticity of host 'hadoop209.cevent.com (192.168.1.209)' can't be established.
RSA key fingerprint is fe:07:91:9c:00:5d:09:2c:48:bb:d5:53:9f:09:6c:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop209.cevent.com,192.168.1.209' (RSA) to the list of known hosts.
cevent@hadoop209.cevent.com's password:
Last login: Mon Jun 15 17:01:11 2020 from 192.168.1.1
[cevent@hadoop209 ~]$ exit
logout

(3) View login records

[cevent@hadoop207 ~]$ cd
[cevent@hadoop207 ~]$ ls -al
total 188
drwx------. 26 cevent cevent  4096 Jun 15 17:03 .
drwxr-xr-x.  3 root   root    4096 Apr 30 09:15 ..
drwxrwxr-x.  2 cevent cevent  4096 May  7 10:43 .abrt
-rw-------.  1 cevent cevent 17232 Jun 15 17:05 .bash_history
-rw-r--r--.  1 cevent cevent    18 May 11  2016 .bash_logout
-rw-r--r--.  1 cevent cevent   176 May 11  2016 .bash_profile
-rw-r--r--.  1 cevent cevent   124 May 11  2016 .bashrc
drwxrwxr-x.  2 cevent cevent  4096 May  7 17:32 .beeline
drwxr-xr-x.  3 cevent cevent  4096 May  7 10:43 .cache
drwxr-xr-x.  5 cevent cevent  4096 May  7 10:43 .config
drwx------.  3 cevent cevent  4096 May  7 10:43 .dbus
-rw-------.  1 cevent cevent    16 May  7 10:43 .esd_auth
drwx------.  4 cevent cevent  4096 Jun 14 22:13 .gconf
drwx------.  2 cevent cevent  4096 Jun 14 23:14 .gconfd
drwxr-xr-x.  5 cevent cevent  4096 May  7 10:43 .gnome2
drwxrwxr-x.  3 cevent cevent  4096 May  7 10:43 .gnote
drwx------.  2 cevent cevent  4096 Jun 14 22:13 .gnupg
-rw-rw-r--.  1 cevent cevent   195 Jun 14 22:13 .gtk-bookmarks
drwx------.  2 cevent cevent  4096 May  7 10:43 .gvfs
-rw-rw-r--.  1 cevent cevent   589 Jun 12 13:42 .hivehistory
-rw-------.  1 cevent cevent   620 Jun 14 22:13 .ICEauthority
-rw-r--r--.  1 cevent cevent   874 Jun 14 23:14 .imsettings.log
drwxr-xr-x.  3 cevent cevent  4096 May  7 10:43 .local
drwxr-xr-x.  4 cevent cevent  4096 Mar 13 01:51 .mozilla
-rw-------.  1 cevent cevent   214 May  7 13:37 .mysql_history
drwxr-xr-x.  2 cevent cevent  4096 May  7 10:43 .nautilus
drwx------.  2 cevent cevent  4096 May  7 10:43 .pulse
-rw-------.  1 cevent cevent   256 May  7 10:43 .pulse-cookie
drwx------.  2 cevent cevent  4096 Apr 30 15:20 .ssh
[cevent@hadoop207 ~]$ cd .ssh/
[cevent@hadoop207 .ssh]$ ll
total 16
-rw-------. 1 cevent cevent  409 Apr 30 15:20 authorized_keys
-rw-------. 1 cevent cevent 1675 Apr 30 15:20 id_rsa
-rw-r--r--. 1 cevent cevent  409 Apr 30 15:20 id_rsa.pub
-rw-r--r--. 1 cevent cevent  832 Jun 15 17:08 known_hosts

View the recorded hosts:

[cevent@hadoop207 .ssh]$ vi known_hosts

3.8 Generate an ssh key

      
      
[cevent@hadoop207 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cevent/.ssh/id_rsa): cevent
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in cevent.
Your public key has been saved in cevent.pub.
The key fingerprint is:
1c:5a:1a:d2:e4:3b:fe:36:39:df:95:b3:75:85:0f:af cevent@hadoop207.cevent.com
The key's randomart image is:
+--[ RSA 2048]----+
(randomart omitted; the original was mangled by line wrapping)
+-----------------+

Copy the key to the other hosts (the local machine must be covered too):

[cevent@hadoop207 .ssh]$ ssh-copy-id hadoop208.cevent.com
[cevent@hadoop207 .ssh]$ ssh-copy-id hadoop209.cevent.com
      
      
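The prompts above can also be skipped entirely: ssh-keygen accepts the key path and an empty passphrase on the command line. A sketch (assumes ssh-keygen is installed; the /tmp path is made up for the demo so it won't touch your real ~/.ssh — for the cluster you would keep the default ~/.ssh/id_rsa path so ssh-copy-id picks it up):

```shell
# Generate a 2048-bit RSA key pair non-interactively (-N '' = empty passphrase).
KEY=/tmp/demo_id_rsa
rm -f "$KEY" "$KEY.pub"
ssh-keygen -t rsa -b 2048 -N '' -f "$KEY" -q
ls -l "$KEY" "$KEY.pub"
```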

3.9 Test ssh

      
      
[cevent@hadoop207 .ssh]$ ssh hadoop208.cevent.com
Last login: Mon Jun 15 18:23:00 2020 from hadoop207.cevent.com
[cevent@hadoop208 ~]$ exit
logout
Connection to hadoop208.cevent.com closed.
[cevent@hadoop207 .ssh]$ ssh hadoop209.cevent.com
Last login: Mon Jun 15 17:08:11 2020 from hadoop207.cevent.com
[cevent@hadoop209 ~]$ exit
logout
      
       
      
Check the authorized keys on the other VMs:

[cevent@hadoop208 ~]$ cd
[cevent@hadoop208 ~]$ ls -al
[cevent@hadoop208 ~]$ cd .ssh/
[cevent@hadoop208 .ssh]$ ll
total 8
-rw-------. 1 cevent cevent  818 Jun 15 18:24 authorized_keys
-rw-r--r--. 1 cevent cevent 1664 Jun 16 17:56 known_hosts
[cevent@hadoop208 .ssh]$ vi authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAyRWmouFLr4b6orfQrWqtlukJ1orkeYcqM6orH1JMOMo1BLYY/Bop2ZIAHahUxGgUdRlZ5mFkGHut1A7h/VdGLBdegyZXQwxwl6Cx67XIsQWRwUgZWXbujg+qV9irPE7WvxF5e3FvJGfbmh7boPk+q/Hsk6rgZ3k9qrEDx4vhv7eL+Ostt2D8tV4UbRReUNl3yI9bt4P/S7ARBpelFtB4p9drbGSxjtH0sNKBHiDAcAV+MOSLz21+WlYr2x58jAZc37UXi/qYfosgc+u5GJ88z/kEI+1YqXBX11FFiRWZcI2aiLweri5WtHH0hoEZGrTXBeShuWQzegx9/E0upPlsfw== cevent@hadoop202.cevent.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0Fe9XV0baD7RPiGkIf+ZMoMPOaCF445aAvaJdGt8wuegkxJjqPMTAop79xcA7AY/vFS7PjpllM162t/lVoCozGHK1iOfElObiLo6+pxBcwfVYnEUlzAz/L0Ngpss54Eb48xOq068gcKcDAZrNbdtxDkTgzHFttcWpB7j++gRXrfB9O9HxKcRObu16sBM8tLmLF4M+tvxTC/Ko/amnrOvi3/AyCtxH1sRumqUiu9buDJAFAgV1Y+s7CR7GORGIkDkmHr9e3O5UMpNXTgnfIaCPdNzn6qRTUM/Sb5KAkkMBb3MY5NgbOPDvFwkPlG8xcFS5Ua8Arq58n8kwa2dyy94kQ== cevent@hadoop207.cevent.com
      
      

3.10 Test syncing a file with rsync

(1) Create a file

      [cevent@hadoop207 module]$ vim rsync.txt

      kakaxi

(2) Run the sync

[cevent@hadoop207 module]$ rsync -rvl rsync.txt cevent@hadoop208.cevent.com:/opt/module/

3.10.1 xsync: distribute a file to every host

(1) Create an xsync file under /usr/local/bin with the following content:

#!/bin/bash
# 1. Get the argument count ($#); exit if no arguments were given
pcount=$#
if ((pcount == 0)); then
    echo no args
    exit
fi

# 2. Get the file name: basename strips the directory part of $1
p1=$1
fname=$(basename "$p1")
echo fname=$fname

# 3. Get the absolute path of the parent directory
#    (cd -P resolves symlinks; pwd prints the resulting path)
pdir=$(cd -P "$(dirname "$p1")"; pwd)
echo pdir=$pdir

# 4. Get the current user name
user=$(whoami)

# 5. Loop over the hosts and sync
for ((host=207; host<210; host++)); do
    echo --------------- hadoop$host.cevent.com ----------------
    rsync -rvl "$pdir/$fname" "$user@hadoop$host.cevent.com:$pdir"
done
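The three command substitutions are the part of xsync that most often gets mistyped (dropping the $(...) leaves the variables empty). A self-contained sketch of just the path handling, using a made-up /tmp path:

```shell
# basename keeps only the file name; cd -P + pwd turns the directory part
# into an absolute, symlink-resolved path (what xsync passes to rsync).
p1="/tmp/../tmp/xsync_demo.txt"
fname=$(basename "$p1")
pdir=$(cd -P "$(dirname "$p1")" && pwd)
echo "fname=$fname"
echo "pdir=$pdir"
```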

3.11 Test the sync (by default files land in the same path on every host)

      
      
[cevent@hadoop210 bin]$ chmod 777 xsync    # make the script executable
[cevent@hadoop210 bin]$ ll
total 4
-rwxrwxrwx. 1 cevent cevent 842 Jun 28 13:47 xsync

[cevent@hadoop207 module]$ vim xsync.txt
this is a xsync test!

[cevent@hadoop207 module]$ xsync xsync.txt
fname=xsync.txt
pdir=/opt/module
--------------- hadoop207.cevent.com ----------------
sending incremental file list

sent 32 bytes  received 12 bytes  29.33 bytes/sec
total size is 23  speedup is 0.52
--------------- hadoop208.cevent.com ----------------
sending incremental file list
xsync.txt

sent 98 bytes  received 31 bytes  258.00 bytes/sec
total size is 23  speedup is 0.18
--------------- hadoop209.cevent.com ----------------
sending incremental file list
xsync.txt

sent 98 bytes  received 31 bytes  258.00 bytes/sec
total size is 23  speedup is 0.18
      
      

3.12 xcall: run a command on all hosts at once

(1) Create an xcall file under /usr/local/bin with the following content:

      
      
#!/bin/bash
# $# is the argument count; $@ expands to all arguments
pcount=$#
if ((pcount == 0)); then
    echo no args
    exit
fi

# Run the command locally first
echo -------------localhost.cevent.com----------
$@

# Then run it on every host over ssh
for ((host=207; host<210; host++)); do
    echo ----------hadoop$host.cevent.com---------
    ssh hadoop$host.cevent.com $@
done
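The two special parameters driving xcall are easy to demo in isolation: $# is the argument count and $@ expands to every argument. A small sketch with a made-up function name (run_all), which mimics the argument handling without the ssh loop:

```shell
# run_all prints "no args" when called bare, otherwise echoes the command
# it would dispatch, exactly as xcall does before its ssh loop.
run_all() {
  if (($# == 0)); then
    echo "no args"
    return
  fi
  echo "would run on every host: $@"
}
run_all                 # prints: no args
run_all rm -rf /opt/module/rsync.txt
```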
      
      

(2) Make the xcall script executable

      
      
[cevent@hadoop207 bin]$ sudo chmod a+x xcall
[cevent@hadoop207 bin]$ xcall rm -rf /opt/module/rsync.txt
-------------localhost.cevent.com----------
----------hadoop207.cevent.com---------
----------hadoop208.cevent.com---------
----------hadoop209.cevent.com---------
      
      

4. Hadoop configuration

4.1 core-site.xml

      
      
[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- NameNode address; hadoop207.cevent.com is this machine's hostname -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop207.cevent.com:8020</value>
    </property>

    <!-- Where temporary data is stored -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-2.7.2/data/tmp</value>
    </property>
</configuration>
      
      
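A quick way to double-check what a *-site.xml file actually says is to pull the value out with grep and sed. A self-contained sketch (it writes a miniature demo file to /tmp; point CONF at the real /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml instead):

```shell
# Write a miniature core-site.xml and extract the fs.defaultFS value from it.
CONF=/tmp/core-site-demo.xml
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop207.cevent.com:8020</value>
  </property>
</configuration>
EOF
# grep -A1 prints the <name> line plus the <value> line that follows it.
FS=$(grep -A1 '<name>fs.defaultFS</name>' "$CONF" | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p')
echo "$FS"
```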

4.2 hdfs-site.xml

      
      
[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop207.cevent.com:50090</value>
    </property>
</configuration>
      
      

4.3 slaves

      
      
[cevent@hadoop207 ~]$ vim /opt/module/hadoop-2.7.2/etc/hadoop/slaves
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
      
      

4.4 yarn-site.xml

      
      
[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

    <!-- Site specific YARN configuration properties -->

    <!-- How reducers fetch data -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!-- ResourceManager address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop207.cevent.com</value>
    </property>
</configuration>
      
      

4.5 mapred-site.xml

      
      
[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
      
      

4.6 Distribute the config with xsync and verify with xcall

      
      
[cevent@hadoop207 module]$ xsync hadoop-2.7.2/
[cevent@hadoop207 module]$ xcall cat /opt/module/hadoop-2.7.2/etc/hadoop/slaves
-------------localhost.cevent.com----------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
----------hadoop207.cevent.com---------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
----------hadoop208.cevent.com---------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
----------hadoop209.cevent.com---------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
      
      

4.7 [If the NameNode won't start] set the JDK in Hadoop's mapred-env.sh

      
      
[cevent@hadoop210 hadoop]$ vim /opt/module/hadoop-2.7.2/etc/hadoop/mapred-env.sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/opt/module/jdk1.7.0_79

export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000

export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA

#export HADOOP_JOB_HISTORYSERVER_OPTS=
#export HADOOP_MAPRED_LOG_DIR="" # Where log files are stored.  $HADOOP_MAPRED_HOME/logs by default.
#export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.
#export HADOOP_MAPRED_PID_DIR= # The pid files are stored. /tmp by default.
#export HADOOP_MAPRED_IDENT_STRING= #A string representing this instance of hadoop. $USER by default
#export HADOOP_MAPRED_NICENESS= #The scheduling priority for daemons. Defaults to 0.
      
      

4.8 [DataNode failed to start] Delete all data/ and logs/ directories, then reformat

      
      
[cevent@hadoop210 hadoop-2.7.2]$ rm -rf logs/
[cevent@hadoop210 hadoop-2.7.2]$ bin/hdfs namenode -format   # reformat the NameNode

[cevent@hadoop210 hadoop-2.7.2]$ sbin/start-dfs.sh
Starting namenodes on [hadoop210.cevent.com]
hadoop210.cevent.com: starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-cevent-namenode-hadoop210.cevent.com.out
hadoop210.cevent.com: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-cevent-datanode-hadoop210.cevent.com.out
hadoop211.cevent.com: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-cevent-datanode-hadoop211.cevent.com.out
hadoop212.cevent.com: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-cevent-datanode-hadoop212.cevent.com.out
Starting secondary namenodes [hadoop210.cevent.com]
hadoop210.cevent.com: starting secondarynamenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-cevent-secondarynamenode-hadoop210.cevent.com.out
[cevent@hadoop210 hadoop-2.7.2]$ jps
18471 DataNode
18338 NameNode
18638 SecondaryNameNode
18760 Jps
[cevent@hadoop210 hadoop-2.7.2]$ sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-cevent-resourcemanager-hadoop210.cevent.com.out
hadoop211.cevent.com: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-cevent-nodemanager-hadoop211.cevent.com.out
hadoop210.cevent.com: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-cevent-nodemanager-hadoop210.cevent.com.out
hadoop212.cevent.com: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-cevent-nodemanager-hadoop212.cevent.com.out
[cevent@hadoop210 hadoop-2.7.2]$ jps
19202 Jps
18471 DataNode
18338 NameNode
18638 SecondaryNameNode
18811 ResourceManager
18919 NodeManager
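The cleanup step above can be sketched as a small script. The throwaway `BASE` directory is an assumption so the sketch runs anywhere; on each real node the base would be /opt/module/hadoop-2.7.2, and all daemons should be stopped before wiping and reformatting:

```shell
#!/bin/sh
# Sketch: wipe the old HDFS data/ and logs/ dirs before `hdfs namenode -format`,
# otherwise DataNodes keep a clusterID that no longer matches the new NameNode.
# BASE defaults to a throwaway demo dir; on a real node it is the Hadoop home.
BASE="${BASE:-./hadoop-demo}"
mkdir -p "$BASE/data" "$BASE/logs"   # stand-ins for the real dirs
rm -rf "$BASE/data" "$BASE/logs"     # repeat on every node, then reformat
ls -A "$BASE"                        # prints nothing: both dirs are gone
```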

4.9 Adjust user permissions

      
      
      
       
        
[cevent@hadoop210 opt]$ mkdir soft
mkdir: cannot create directory "soft": Permission denied
[cevent@hadoop210 opt]$ sudo vi /etc/sudoers

## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
##      user    MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
cevent  ALL=(ALL)       ALL

[cevent@hadoop210 opt]$ sudo chown -R cevent:cevent /opt/
        
       
      
      
      

5. ZooKeeper setup

5.1 Prepare the package

      zookeeper

5.2 Extract

      
      
[cevent@hadoop207 soft]$ tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/

[cevent@hadoop207 soft]$ cd /opt/module/
[cevent@hadoop207 zookeeper-3.4.10]$ ll
total 1592
drwxr-xr-x.  2 cevent cevent    4096 Mar 23  2017 bin
-rw-rw-r--.  1 cevent cevent   84725 Mar 23  2017 build.xml
drwxr-xr-x.  2 cevent cevent    4096 Jun 16 22:38 conf
drwxr-xr-x. 10 cevent cevent    4096 Mar 23  2017 contrib
drwxrwxr-x.  3 cevent cevent    4096 Jun 16 22:34 data
drwxr-xr-x.  2 cevent cevent    4096 Mar 23  2017 dist-maven
drwxr-xr-x.  6 cevent cevent    4096 Mar 23  2017 docs
-rw-rw-r--.  1 cevent cevent    1709 Mar 23  2017 ivysettings.xml
-rw-rw-r--.  1 cevent cevent    5691 Mar 23  2017 ivy.xml
drwxr-xr-x.  4 cevent cevent    4096 Mar 23  2017 lib
-rw-rw-r--.  1 cevent cevent   11938 Mar 23  2017 LICENSE.txt
-rw-rw-r--.  1 cevent cevent    3132 Mar 23  2017 NOTICE.txt
-rw-rw-r--.  1 cevent cevent    1770 Mar 23  2017 README_packaging.txt
-rw-rw-r--.  1 cevent cevent    1585 Mar 23  2017 README.txt
drwxr-xr-x.  5 cevent cevent    4096 Mar 23  2017 recipes
drwxr-xr-x.  8 cevent cevent    4096 Mar 23  2017 src
-rw-rw-r--.  1 cevent cevent 1456729 Mar 23  2017 zookeeper-3.4.10.jar
-rw-rw-r--.  1 cevent cevent     819 Mar 23  2017 zookeeper-3.4.10.jar.asc
-rw-rw-r--.  1 cevent cevent      33 Mar 23  2017 zookeeper-3.4.10.jar.md5
-rw-rw-r--.  1 cevent cevent      41 Mar 23  2017 zookeeper-3.4.10.jar.sha1
      
      

5.3 Rename zoo_sample.cfg under /opt/module/zookeeper-3.4.10/conf to zoo.cfg

      
      
[cevent@hadoop207 zookeeper-3.4.10]$ mv conf/zoo_sample.cfg zoo.cfg
[cevent@hadoop207 zookeeper-3.4.10]$ mv zoo.cfg conf/

5.4 Create zkData

      
      
[cevent@hadoop207 zookeeper-3.4.10]$ mkdir -p data/zkData
/opt/module/zookeeper-3.4.10/data/zkData

5.5 Edit zoo.cfg

      
      
[cevent@hadoop207 zookeeper-3.4.10]$ vim conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/module/zookeeper-3.4.10/data/zkData
# the port at which the clients will connect
clientPort=2181

# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
      
      

5.6 Start the ZooKeeper server and client (edit myid first)

      
      
[Run the sync]
[cevent@hadoop210 module]$ xsync zookeeper-3.4.10/

On hadoop207/208/209, set each node's own id:
[cevent@hadoop207 zookeeper-3.4.10]$ vim data/zkData/myid
207    # 208 on hadoop208, 209 on hadoop209

[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[cevent@hadoop207 zookeeper-3.4.10]$ jps
6134 QuorumPeerMain
6157 Jps
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkCli.sh
Connecting to localhost:2181
       
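Writing the three myid files by hand is error-prone; a hedged sketch of scripting it follows. The local staging directory and the myid.&lt;host&gt; filenames are assumptions so the sketch can run anywhere — on each real node the file is simply /opt/module/zookeeper-3.4.10/data/zkData/myid:

```shell
#!/bin/sh
# Sketch: derive each node's myid from a host:id map (ids 207/208/209 must match
# the server.207/208/209 entries in the clustered zoo.cfg).
ZKDATA="${ZKDATA:-./zkData-demo}"     # stand-in for data/zkData on each node
mkdir -p "$ZKDATA"
for pair in hadoop207:207 hadoop208:208 hadoop209:209; do
  host="${pair%%:*}"
  id="${pair##*:}"
  echo "$id" > "$ZKDATA/myid.$host"   # on node $host this would be .../myid
done
cat "$ZKDATA/myid.hadoop208"
```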

5.7 Fixing the zkCli.sh startup failure

      
      
[Error]
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
      
      

[Fix]

      
      
|- Turn off the firewall
Temporarily (takes effect immediately, but the firewall comes back after a reboot):
service iptables stop
Permanently (takes effect after a reboot):
chkconfig iptables off

Re-extract ZooKeeper, then create zookeeper_server.pid under data/zkData containing the process ID:

[cevent@hadoop207 zookeeper-3.4.10]$ jps
5449 QuorumPeerMain
5763 Jps
[cevent@hadoop207 zookeeper-3.4.10]$ cd data/zkData/
[cevent@hadoop207 zkData]$ ll
total 4
drwxrwxr-x. 2 cevent cevent 4096 Jun 17 11:54 version-2
[cevent@hadoop207 zkData]$ vim zookeeper_server.pid
5449

[Startup succeeded]
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: standalone

[zk: localhost:2181(CONNECTED) 1] ll
ZooKeeper -server host:port cmd args
        connect host:port
        get path [watch]
        ls path [watch]
        set path data [version]
        rmr path
        delquota [-n|-b] path
        quit
        printwatches on|off
        create [-s] [-e] path data acl
        stat path [watch]
        close
        ls2 path [watch]
        history
        listquota path
        setAcl path acl
        getAcl path
        sync path
        redo cmdno
        addauth scheme auth
        delete path [version]
        setquota -n|-b val path
[zk: localhost:2181(CONNECTED) 2] quit
Quitting...
2020-06-17 12:00:36,210 [myid:] - INFO  [main:ZooKeeper@684] - Session: 0x172c06c50490000 closed
2020-06-17 12:00:36,211 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x172c06c50490000
      
      
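The pid-file fix can be sketched as a one-liner. Using `$$` as a stand-in pid is an assumption so the sketch runs anywhere; on the real node you would take the QuorumPeerMain pid from jps (or `pgrep -f QuorumPeerMain`):

```shell
#!/bin/sh
# Sketch: record the running QuorumPeerMain pid in zookeeper_server.pid so
# zkServer.sh status/stop can find the process.
PIDFILE="${PIDFILE:-./zookeeper_server.pid}"  # real path: data/zkData/zookeeper_server.pid
pid=$$                                        # stand-in; on the node: pgrep -f QuorumPeerMain
echo "$pid" > "$PIDFILE"
cat "$PIDFILE"
```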

5.8 Configure the ZooKeeper cluster in zoo.cfg

      
      
[cevent@hadoop207 zookeeper-3.4.10]$ vim conf/zoo.cfg
Append the following at the end:

##################cluster#################
server.207=hadoop207.cevent.com:2888:3888
server.208=hadoop208.cevent.com:2888:3888
server.209=hadoop209.cevent.com:2888:3888
      
      
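For reference, each cluster entry has the form server.&lt;id&gt;=&lt;host&gt;:&lt;followerPort&gt;:&lt;electionPort&gt;, and &lt;id&gt; must match the myid file on that host; a commented copy of one entry (port roles as described in the ZooKeeper administrator guide):

```
# server.<id>=<host>:<followerPort>:<electionPort>
#   <id>   must equal the contents of data/zkData/myid on <host>
#   2888   port followers use to connect to the leader
#   3888   port used for leader election
server.207=hadoop207.cevent.com:2888:3888
```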

[Distribute]

      
      
[cevent@hadoop210 zookeeper-3.4.10]$ xsync conf/zoo.cfg
fname=zoo.cfg
pdir=/opt/module/zookeeper-3.4.10/conf
--------------- hadoop210.cevent.com ----------------
sending incremental file list
sent 30 bytes  received 12 bytes  84.00 bytes/sec
total size is 1118  speedup is 26.62
--------------- hadoop211.cevent.com ----------------
sending incremental file list
zoo.cfg
sent 495 bytes  received 43 bytes  358.67 bytes/sec
total size is 1118  speedup is 2.08
--------------- hadoop212.cevent.com ----------------
sending incremental file list
zoo.cfg
sent 495 bytes  received 43 bytes  1076.00 bytes/sec
total size is 1118  speedup is 2.08

5.9 Start Hadoop dfs and yarn (ZooKeeper also works without Hadoop, but at least 2 zk nodes must be running)

      
      
[cevent@hadoop207 hadoop-2.7.2]$ sbin/start-dfs.sh
[cevent@hadoop207 hadoop-2.7.2]$ sbin/start-yarn.sh

Start ZooKeeper on each node in turn; once all are up, check each with zkServer.sh status:
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[cevent@hadoop208 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
      
      

6. Hive setup

6.1 Prepare the package

      hive

6.2 Extract

      
      
[cevent@hadoop210 soft]$ tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /opt/module/
[cevent@hadoop210 module]$ mv apache-hive-1.2.1-bin/ hive-1.2.1
[cevent@hadoop210 module]$ ll
total 16
drwxr-xr-x. 12 cevent cevent 4096 Jun 28 15:21 hadoop-2.7.2
drwxrwxr-x.  8 cevent cevent 4096 Jun 28 16:23 hive-1.2.1
drwxr-xr-x.  8 cevent cevent 4096 Mar 24 09:14 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Jun 28 15:50 zookeeper-3.4.10
      
      

6.3 Configure /etc/profile

      
      
      
       
        
[cevent@hadoop210 module]$ vim /etc/profile

#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin

#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

#ZOOKEEPER_HOME
export ZOOKEEPER_HOME=/opt/module/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin

#HIVE_HOME
export HIVE_HOME=/opt/module/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin
        
       
      
      
      
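After `source /etc/profile`, it is worth confirming that the new bin directories actually landed on PATH. A hedged sketch follows; the local profile-demo file is an assumption so the check can run anywhere without touching the real /etc/profile:

```shell
#!/bin/sh
# Sketch: write the same exports to a throwaway file, source it, and confirm
# the Hive bin dir is on PATH. On the node you would instead run
# `source /etc/profile` followed by the same grep.
cat > ./profile-demo <<'EOF'
export JAVA_HOME=/opt/module/jdk1.7.0_79
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export ZOOKEEPER_HOME=/opt/module/zookeeper-3.4.10
export HIVE_HOME=/opt/module/hive-1.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin
EOF
. ./profile-demo
echo "$PATH" | grep -q '/opt/module/hive-1.2.1/bin' && echo PATH-ok
```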

6.4 Configure hive-env.sh

      
      
      
       
        
# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
HADOOP_HOME=/opt/module/hadoop-2.7.2

# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
export HIVE_CONF_DIR=/opt/module/hive-1.2.1/conf

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH=
        
      

6.5 Start DFS, YARN, and ZooKeeper

      
      
[cevent@hadoop210 hadoop-2.7.2]$ jps
24482 SecondaryNameNode
24646 ResourceManager
24761 NodeManager
25108 QuorumPeerMain
25142 Jps
24303 DataNode
24163 NameNode

[cevent@hadoop210 hive-1.2.1]$ hive

Logging initialized using configuration in jar:file:/opt/module/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> create database cevent01;
OK
Time taken: 0.954 seconds
hive> use cevent01;
OK
Time taken: 0.062 seconds
hive> create table c_student(id int,name string);
OK
Time taken: 0.567 seconds
hive> show tables;
OK
c_student
Time taken: 0.194 seconds, Fetched: 1 row(s)
hive>
      
      

7. Install MySQL

7.1 Check the preinstalled MySQL packages before installing

      
      
[cevent@hadoop207 soft]$ rpm -qa | grep mysql   # list the preinstalled MySQL packages
mysql-libs-5.1.73-7.el6.x86_64

[cevent@hadoop207 soft]$ rpm -qa | grep mysql | xargs sudo rpm -e --nodeps
# xargs turns the grep output into arguments for rpm -e (uninstall);
# --nodeps forces removal, ignoring dependency checks
[sudo] password for cevent:
      
      
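The uninstall pipeline is easier to see in isolation. A dry-run sketch follows, using printf to fake the installed-package list and a plain echo in place of `sudo rpm -e --nodeps`, so nothing is actually removed:

```shell
#!/bin/sh
# Sketch of `rpm -qa | grep mysql | xargs sudo rpm -e --nodeps`:
# stage 1 fakes the installed-package list, stage 2 keeps the mysql ones,
# stage 3 hands them to the remover (echo here; `sudo rpm -e --nodeps` for real).
printf '%s\n' mysql-libs-5.1.73-7.el6.x86_64 coreutils-8.4 \
  | grep mysql \
  | xargs echo would-remove
```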

7.2 Install the MySQL server

      
      
[cevent@hadoop207 soft]$ rpm -ivh MySQL-server-5.6.24-1.el6.x86_64.rpm
error: can't create transaction lock on /var/lib/rpm/.rpm.lock (permission denied)
[cevent@hadoop207 soft]$ sudo rpm -ivh MySQL-server-5.6.24-1.el6.x86_64.rpm

2020-05-07 13:17:14 5020 [Note] InnoDB: Shutdown completed; log sequence number 1625987

A RANDOM PASSWORD HAS BEEN SET FOR THE MySQL root USER !   # a random password was generated for root
You will find that password in '/root/.mysql_secret'.

You must change that password on your first connect,
no other statement but 'SET PASSWORD' will be accepted.
See the manual for the semantics of the 'password expired' flag.

Also, the account for the anonymous user has been removed.

In addition, you can run:

  /usr/bin/mysql_secure_installation   # to reset the password

which will also give you the option of removing the test database.
This is strongly recommended for production servers.

See the manual for more instructions.

Please report any problems at http://bugs.mysql.com/
      
      

7.3 View the generated password

      
      
[cevent@hadoop210 soft]$ sudo cat /root/.mysql_secret
# The random password set for the root user at Sun Jun 28 16:43:41 2020 (local time): zaSF9FWPlTD1C6Mv
      
      

7.4 Install the MySQL client

      
      
[cevent@hadoop207 soft]$ sudo rpm -ivh MySQL-client-5.6.24-1.el6.x86_64.rpm   # install the MySQL client
Preparing...                ########################################### [100%]
   1:MySQL-client           ########################################### [100%]

[cevent@hadoop207 soft]$ sudo service mysql start   # start the MySQL service
Starting MySQL.......                                      [  OK  ]

[cevent@hadoop207 soft]$ mysql_secure_installation   # reset the password
Enter current password for root (enter for none):
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Enter current password for root (enter for none):   # enter the random password
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

Cleaning up...
      
      

7.5 Change MySQL's default localhost-only access

      
      
[cevent@hadoop207 soft]$ mysql -uroot -pcevent   # log in to MySQL to change access privileges
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.6.24 MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show database;   -- typo: the correct command is "show databases;"
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'database' at line 1
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+---------------------------+
| Tables_in_mysql           |
+---------------------------+
| columns_priv              |
| db                        |
| event                     |
| func                      |
| general_log               |
| help_category             |
| help_keyword              |
| help_relation             |
| help_topic                |
| innodb_index_stats        |
| innodb_table_stats        |
| ndb_binlog_index          |
| plugin                    |
| proc                      |
| procs_priv                |
| proxies_priv              |
| servers                   |
| slave_master_info         |
| slave_relay_log_info      |
| slave_worker_info         |
| slow_log                  |
| tables_priv               |
| time_zone                 |
| time_zone_leap_second     |
| time_zone_name            |
| time_zone_transition      |
| time_zone_transition_type |
| user                      |
+---------------------------+
28 rows in set (0.00 sec)

mysql> select user,host,password from user;
+------+-----------+-------------------------------------------+
| user | host      | password                                  |
+------+-----------+-------------------------------------------+
| root | localhost | *076AB7BA096BCDF76A8F0BD7987FFFCCD05C05F7 |
| root | 127.0.0.1 | *076AB7BA096BCDF76A8F0BD7987FFFCCD05C05F7 |
| root | ::1       | *076AB7BA096BCDF76A8F0BD7987FFFCCD05C05F7 |
+------+-----------+-------------------------------------------+
3 rows in set (0.00 sec)

mysql> delete from user where host="127.0.0.1" or host="::1";
Query OK, 2 rows affected (0.00 sec)

mysql> select user,root,password from user;   -- typo: "root" is not a column; "host" was meant
ERROR 1054 (42S22): Unknown column 'root' in 'field list'
mysql> select user,host,password from user;
+------+-----------+-------------------------------------------+
| user | host      | password                                  |
+------+-----------+-------------------------------------------+
| root | localhost | *076AB7BA096BCDF76A8F0BD7987FFFCCD05C05F7 |
+------+-----------+-------------------------------------------+
1 row in set (0.00 sec)

mysql> update user set host="%";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select user,host,password from user;   -- verify: host "%" allows remote access
+------+------+-------------------------------------------+
| user | host | password                                  |
+------+------+-------------------------------------------+
| root | %    | *076AB7BA096BCDF76A8F0BD7987FFFCCD05C05F7 |
+------+------+-------------------------------------------+
1 row in set (0.00 sec)

mysql> flush privileges;   -- reload the changed grants
Query OK, 0 rows affected (0.00 sec)

mysql> quit;   -- exit
Bye
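The interactive session above can be collected into a single reusable SQL script. A minimal sketch, assuming the MySQL 5.6 server and the root password `cevent` from this guide; the script is written to `/tmp` here, and the command that applies it is left commented:

```shell
# Collect the statements from the session above into one script.
cat > /tmp/allow_remote_root.sql <<'EOF'
use mysql;
-- drop the redundant loopback accounts
delete from user where host='127.0.0.1' or host='::1';
-- let root connect from any host
update user set host='%' where user='root';
flush privileges;
EOF

# Apply it against the server from section 7 (not run here):
# mysql -uroot -pcevent < /tmp/allow_remote_root.sql
```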
      
      

8. Configure Hive's metastore database access

8.1 Create the hive-site.xml file

      
      
[cevent@hadoop210 soft]$ cd /opt/module/hive-1.2.1/
[cevent@hadoop210 hive-1.2.1]$ cd conf/
[cevent@hadoop210 conf]$ vim hive-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

    <!-- MySQL database URL -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop210.cevent.com:3306/metastore?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>

    <!-- MySQL username and password -->
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>cevent</value>
        <description>password to use against metastore database</description>
    </property>

</configuration>
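A quick sanity check on this file: extract the JDBC URL and confirm it points at the metastore host and port. A minimal sketch; the `/tmp` path stands in for `/opt/module/hive-1.2.1/conf/hive-site.xml` so it can run anywhere:

```shell
# Stand-in copy of the relevant property (in practice, point CONF at
# /opt/module/hive-1.2.1/conf/hive-site.xml).
CONF=/tmp/hive-site-check.xml
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop210.cevent.com:3306/metastore?createDatabaseIfNotExist=true</value>
  </property>
</configuration>
EOF

# Pull the <value> line that follows the ConnectionURL <name> and print it.
URL=$(grep -A1 'javax.jdo.option.ConnectionURL' "$CONF" \
      | sed -n 's|.*<value>\([^<]*\)</value>.*|\1|p')
echo "$URL"
```

If the printed URL does not name the MySQL host from section 7, Hive will fall back to errors at startup rather than at configuration time, so this check catches typos early.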
      
      

8.2 Copy the MySQL connector JAR

      
      
[cevent@hadoop207 conf]$ cd /opt/soft/
[cevent@hadoop207 soft]$ ll
total 507012
-rw-r--r--. 1 cevent cevent  92834839 Apr 30 15:53 apache-hive-1.2.1-bin.tar.gz
-rw-r--r--. 1 cevent cevent 197657687 Apr 30 13:23 hadoop-2.7.2.tar.gz
-rw-r--r--. 1 cevent cevent 153512879 Apr 30 13:21 jdk-7u79-linux-x64.gz
-rw-r--r--. 1 cevent cevent  18509960 Apr 30 15:56 MySQL-client-5.6.24-1.el6.x86_64.rpm
-rw-r--r--. 1 cevent cevent    872303 Apr 30 15:56 mysql-connector-java-5.1.27-bin.jar
-rw-r--r--. 1 cevent cevent  55782196 Apr 30 15:56 MySQL-server-5.6.24-1.el6.x86_64.rpm
[cevent@hadoop207 soft]$ cp mysql-connector-java-5.1.27-bin.jar /opt/module/hive-1.2.1/lib/   # copy the connector into hive/lib
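Hive loads every JAR in its `lib/` directory at startup, so a missing connector only surfaces later as a metastore connection error. A minimal sketch of verifying the copy landed; the `/tmp` paths stand in for `/opt/soft` and `hive-1.2.1/lib` so the check is self-contained:

```shell
SRC=/tmp/soft          # stand-in for /opt/soft
DST=/tmp/hive-lib      # stand-in for /opt/module/hive-1.2.1/lib
mkdir -p "$SRC" "$DST"
touch "$SRC/mysql-connector-java-5.1.27-bin.jar"   # simulate the downloaded JAR

# The copy from the transcript, plus a check that it arrived.
cp "$SRC"/mysql-connector-java-*.jar "$DST"/
ls "$DST"/mysql-connector-java-*.jar
```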
      
       
      
      

8.3 Start Hive and test

      
      
[cevent@hadoop207 opt]$ cd module/hive-1.2.1/
[cevent@hadoop207 hive-1.2.1]$ ll
total 504
drwxrwxr-x. 3 cevent cevent   4096 Apr 30 15:59 bin
drwxrwxr-x. 2 cevent cevent   4096 May  7 13:49 conf
-rw-rw-r--. 1 cevent cevent  21062 Apr 30 16:44 derby.log
drwxrwxr-x. 4 cevent cevent   4096 Apr 30 15:59 examples
drwxrwxr-x. 7 cevent cevent   4096 Apr 30 15:59 hcatalog
drwxrwxr-x. 4 cevent cevent   4096 May  7 13:51 lib
-rw-rw-r--. 1 cevent cevent  24754 Apr 30  2015 LICENSE
drwxrwxr-x. 5 cevent cevent   4096 Apr 30 16:44 metastore_db
-rw-rw-r--. 1 cevent cevent    397 Jun 19  2015 NOTICE
-rw-rw-r--. 1 cevent cevent   4366 Jun 19  2015 README.txt
-rw-rw-r--. 1 cevent cevent 421129 Jun 19  2015 RELEASE_NOTES.txt
drwxrwxr-x. 3 cevent cevent   4096 Apr 30 15:59 scripts
[cevent@hadoop207 hive-1.2.1]$ hive

Logging initialized using configuration in jar:file:/opt/module/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> create database cevent01;   -- create a database
OK
Time taken: 1.22 seconds
hive> use cevent01;   -- switch to it
OK
Time taken: 0.048 seconds
hive> create table student(id int,name string);   -- create a table
OK
Time taken: 1.212 seconds
hive> insert into ta                              (Tab pressed; completions follow)
table         tables        tablesample   tan(
hive> insert into table student values(2020,"cevent");   -- insert a row
Query ID = cevent_20200507135624_60770fe8-ce63-4514-be6f-960f1fafae70
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1588830813660_0001, Tracking URL = http://hadoop207.cevent.com:8088/proxy/application_1588830813660_0001/
Kill Command = /opt/module/hadoop-2.7.2/bin/hadoop job  -kill job_1588830813660_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-05-07 13:56:47,121 Stage-1 map = 0%,  reduce = 0%
2020-05-07 13:56:57,747 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.0 sec
MapReduce Total cumulative CPU time: 2 seconds 0 msec
Ended Job = job_1588830813660_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://hadoop207.cevent.com:8020/user/hive/warehouse/cevent01.db/student/.hive-staging_hive_2020-05-07_13-56-24_509_3865710950866541111-1/-ext-10000
Loading data to table cevent01.student
Table cevent01.student stats: [numFiles=1, numRows=1, totalSize=12, rawDataSize=11]
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 2.0 sec   HDFS Read: 3675 HDFS Write: 84 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 0 msec
OK
Time taken: 35.948 seconds
hive> select * from student;   -- query the table
OK
2020    cevent
Time taken: 0.24 seconds, Fetched: 1 row(s)
hive> drop table student;   -- drop the table
OK
hive> drop database cevent01;   -- drop the database
OK
Time taken: 0.214 seconds

(continuing on hadoop210)
[cevent@hadoop210 hive-1.2.1]$ ll
total 504
drwxrwxr-x. 3 cevent cevent   4096 Jun 28 16:23 bin
drwxrwxr-x. 2 cevent cevent   4096 Jun 28 16:56 conf
-rw-rw-r--. 1 cevent cevent  21061 Jun 28 16:35 derby.log
drwxrwxr-x. 4 cevent cevent   4096 Jun 28 16:23 examples
drwxrwxr-x. 7 cevent cevent   4096 Jun 28 16:23 hcatalog
drwxrwxr-x. 4 cevent cevent   4096 Jun 28 16:58 lib
-rw-rw-r--. 1 cevent cevent  24754 Apr 30  2015 LICENSE
drwxrwxr-x. 5 cevent cevent   4096 Jun 28 16:35 metastore_db
-rw-rw-r--. 1 cevent cevent    397 Jun 19  2015 NOTICE
-rw-rw-r--. 1 cevent cevent   4366 Jun 19  2015 README.txt
-rw-rw-r--. 1 cevent cevent 421129 Jun 19  2015 RELEASE_NOTES.txt
drwxrwxr-x. 3 cevent cevent   4096 Jun 28 16:23 scripts
[cevent@hadoop210 hive-1.2.1]$ bin/hiveserver2
OK

[cevent@hadoop210 hive-1.2.1]$ bin/beeline
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://hadoop210.cevent.com:10000
Connecting to jdbc:hive2://hadoop210.cevent.com:10000
Enter username for jdbc:hive2://hadoop210.cevent.com:10000: cevent
Enter password for jdbc:hive2://hadoop210.cevent.com:10000: ******
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop210.cevent.com:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (1.524 seconds)
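The two-step routine above (start `hiveserver2`, then `!connect` interactively in Beeline) can be scripted. A minimal sketch; the commented commands need the running cluster from this guide, while the URL construction runs anywhere:

```shell
# Build the JDBC URL from the host/port used in this guide.
HS2_HOST=hadoop210.cevent.com
HS2_PORT=10000
JDBC_URL="jdbc:hive2://${HS2_HOST}:${HS2_PORT}"
echo "$JDBC_URL" | tee /tmp/hs2_url.txt

# On the cluster (not run here): background HiveServer2, then connect
# non-interactively and run a query in one shot.
# nohup /opt/module/hive-1.2.1/bin/hiveserver2 > /tmp/hs2.log 2>&1 &
# /opt/module/hive-1.2.1/bin/beeline -u "$JDBC_URL" -n cevent -e 'show databases;'
```

Beeline's `-u`/`-n`/`-e` flags replace the interactive `!connect` and prompt answers, which makes the smoke test repeatable from a script.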
      
      

9. Flume log-collection cluster

9.1 Prepare the package


9.2 Extract and configure Flume

      
      
      
       
        
[cevent@hadoop210 conf]$ tar -zxvf apache-flume-1.7.0-bin.tar.gz -C /opt/module/

[cevent@hadoop210 soft]$ cd /opt/module/
[cevent@hadoop210 module]$ ll
total 20
drwxrwxr-x.  7 cevent cevent 4096 Jun 28 17:24 apache-flume-1.7.0-bin
drwxr-xr-x. 12 cevent cevent 4096 Jun 28 15:21 hadoop-2.7.2
drwxrwxr-x.  9 cevent cevent 4096 Jun 28 16:35 hive-1.2.1
drwxr-xr-x.  8 cevent cevent 4096 Mar 24 09:14 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Jun 28 15:50 zookeeper-3.4.10
[cevent@hadoop210 module]$ mv apache-flume-1.7.0-bin/ flume-1.7.0
[cevent@hadoop210 module]$ ll
total 20
drwxrwxr-x.  7 cevent cevent 4096 Jun 28 17:24 flume-1.7.0
drwxr-xr-x. 12 cevent cevent 4096 Jun 28 15:21 hadoop-2.7.2
drwxrwxr-x.  9 cevent cevent 4096 Jun 28 16:35 hive-1.2.1
drwxr-xr-x.  8 cevent cevent 4096 Mar 24 09:14 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Jun 28 15:50 zookeeper-3.4.10
[cevent@hadoop210 module]$ cd flume-1.7.0/conf/
[cevent@hadoop210 conf]$ ll
total 16
-rw-r--r--. 1 cevent cevent 1661 Sep 26  2016 flume-conf.properties.template
-rw-r--r--. 1 cevent cevent 1455 Sep 26  2016 flume-env.ps1.template
-rw-r--r--. 1 cevent cevent 1565 Sep 26  2016 flume-env.sh.template
-rw-r--r--. 1 cevent cevent 3107 Sep 26  2016 log4j.properties
[cevent@hadoop210 conf]$ mv flume-env.sh.template flume-env.sh
[cevent@hadoop210 conf]$ vim flume-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced
# during Flume startup.

# Enviroment variables can be set here.

# export JAVA_HOME=/usr/lib/jvm/java-6-sun
export JAVA_HOME=/opt/module/jdk1.7.0_79

# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
# export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"

# Let Flume write raw event data and configuration information to its log files for debugging
# purposes. Enabling these flags is not recommended in production,
# as it may result in logging sensitive user information or encryption secrets.
# export JAVA_OPTS="$JAVA_OPTS -Dorg.apache.flume.log.rawdata=true -Dorg.apache.flume.log.printconfig=true "

# Note that the Flume conf directory is always included in the classpath.
#FLUME_CLASSPATH=""
         
        
       
      
      
      

9.3 Install the nc (netcat) tool

      
      
      
       
        
[cevent@hadoop210 yum.repos.d]$ sudo vi CentOS-Base.repo
[sudo] password for cevent:

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
[base]
name=CentOS-$releasever - Base - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/
#mirrorlist=file:///mnt/cdrom
gpgcheck=1
enabled=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#released updates
[updates]
name=CentOS-$releasever - Updates - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/contrib/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

[cevent@hadoop210 yum.repos.d]$ sudo yum install -y nc
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
base                             | 3.7 kB     00:00
base/primary_db                  | 4.7 MB     00:07
extras                           | 3.4 kB     00:00
updates                          | 3.4 kB     00:00
Resolving Dependencies
        
       
      
      
      

      9.4 Write the flume-conf configuration file and start source monitoring

      
      
      
       
        
        [cevent@hadoop207 ~]$ cd /opt/module/apache-flume-1.7.0/
        [cevent@hadoop207 apache-flume-1.7.0]$ ll
        total 148
        drwxr-xr-x.  2 cevent cevent  4096 6月  11 13:35 bin
        -rw-r--r--.  1 cevent cevent 77387 10月 11 2016 CHANGELOG
        drwxr-xr-x.  2 cevent cevent  4096 6月  11 13:42 conf
        -rw-r--r--.  1 cevent cevent  6172 9月  26 2016 DEVNOTES
        -rw-r--r--.  1 cevent cevent  2873 9月  26 2016 doap_Flume.rdf
        drwxr-xr-x. 10 cevent cevent  4096 10月 13 2016 docs
        drwxrwxr-x.  2 cevent cevent  4096 6月  11 13:35 lib
        -rw-r--r--.  1 cevent cevent 27625 10月 13 2016 LICENSE
        -rw-r--r--.  1 cevent cevent   249 9月  26 2016 NOTICE
        -rw-r--r--.  1 cevent cevent  2520 9月  26 2016 README.md
        -rw-r--r--.  1 cevent cevent  1585 10月 11 2016 RELEASE-NOTES
        drwxrwxr-x.  2 cevent cevent  4096 6月  11 13:35 tools
        [cevent@hadoop207 apache-flume-1.7.0]$ mkdir job
        [cevent@hadoop207 apache-flume-1.7.0]$ ll
        total 152
        drwxr-xr-x.  2 cevent cevent  4096 6月  11 13:35 bin
        -rw-r--r--.  1 cevent cevent 77387 10月 11 2016 CHANGELOG
        drwxr-xr-x.  2 cevent cevent  4096 6月  11 13:42 conf
        -rw-r--r--.  1 cevent cevent  6172 9月  26 2016 DEVNOTES
        -rw-r--r--.  1 cevent cevent  2873 9月  26 2016 doap_Flume.rdf
        drwxr-xr-x. 10 cevent cevent  4096 10月 13 2016 docs
        drwxrwxr-x.  2 cevent cevent  4096 6月  11 16:52 job
        drwxrwxr-x.  2 cevent cevent  4096 6月  11 13:35 lib
        -rw-r--r--.  1 cevent cevent 27625 10月 13 2016 LICENSE
        -rw-r--r--.  1 cevent cevent   249 9月  26 2016 NOTICE
        -rw-r--r--.  1 cevent cevent  2520 9月  26 2016 README.md
        -rw-r--r--.  1 cevent cevent  1585 10月 11 2016 RELEASE-NOTES
        drwxrwxr-x.  2 cevent cevent  4096 6月  11 13:35 tools
        [cevent@hadoop207 apache-flume-1.7.0]$ vim job/flume-netcat-logger.conf
        # Name the components on this agent
        a1.sources = r1
        a1.sinks = k1
        a1.channels = c1

        # Describe/configure the source
        a1.sources.r1.type = netcat
        a1.sources.r1.bind = localhost
        a1.sources.r1.port = 44444

        # Describe the sink
        a1.sinks.k1.type = logger

        # Use a channel which buffers events in memory
        a1.channels.c1.type = memory
        a1.channels.c1.capacity = 1000
        a1.channels.c1.transactionCapacity = 100

        # Bind the source and sink to the channel
        a1.sources.r1.channels = c1
        a1.sinks.k1.channel = c1

        "job/flume-netcat-logger.conf" 22L, 495C written
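Before starting the agent, the source/sink/channel wiring above can be sanity-checked. A minimal sketch (not part of Flume; the parser and `check_wiring` helper are hypothetical, assuming the same `a1.*` property layout as the file above):

```python
# Sketch: verify that every source and sink in a Flume-style properties
# file is bound to a channel that was actually declared.
conf = """
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
"""

def parse(text):
    """Parse key = value lines, skipping blanks and comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def check_wiring(props, agent="a1"):
    """Return a list of wiring errors (empty list means the config is consistent)."""
    channels = set(props[f"{agent}.channels"].split())
    errors = []
    for src in props[f"{agent}.sources"].split():
        if not set(props[f"{agent}.sources.{src}.channels"].split()) <= channels:
            errors.append(f"source {src} bound to undeclared channel")
    for sink in props[f"{agent}.sinks"].split():
        if props[f"{agent}.sinks.{sink}.channel"] not in channels:
            errors.append(f"sink {sink} bound to undeclared channel")
    return errors

print(check_wiring(parse(conf)))  # → []
```

A typo such as `a1.sinks.k1.channel = c2` would show up here immediately, instead of as an agent startup failure.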
        [cevent@hadoop207 apache-flume-1.7.0]$ sudo netstat -nlp | grep 44444   # check whether port 44444 is already in use
        [sudo] password for cevent: 
        [cevent@hadoop207 apache-flume-1.7.0]$ ll bin
        total 36
        -rwxr-xr-x. 1 cevent cevent 12387 9月  26 2016 flume-ng
        -rw-r--r--. 1 cevent cevent   936 9月  26 2016 flume-ng.cmd
        -rwxr-xr-x. 1 cevent cevent 14176 9月  26 2016 flume-ng.ps1
        [cevent@hadoop207 apache-flume-1.7.0]$ bin/flume-ng agent --name a1 --conf conf/ --conf-file job/flume-netcat-logger.conf
        # start the agent: --name sets the agent name, --conf the configuration directory, --conf-file the custom job file
        Info: Sourcing environment configuration script /opt/module/apache-flume-1.7.0/conf/flume-env.sh
        Info: Including Hadoop libraries found via (/opt/module/hadoop-2.7.2/bin/hadoop) for HDFS access
        Info: Including Hive libraries found via (/opt/module/hive-1.2.1) for Hive access
        + exec /opt/module/jdk1.7.0_79/bin/java -Xmx20m -cp '/opt/module/apache-flume-1.7.0/conf:/opt/module/apache-flume-1.7.0/lib/*:/opt/module/hadoop-2.7.2/etc/hadoop:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/common/*:/opt/module/hadoop-2.7.2/share/hadoop/hdfs:/opt/module/hadoop-2.7.2/share/hadoop/hdfs/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/hdfs/*:/opt/module/hadoop-2.7.2/share/hadoop/yarn/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/yarn/*:/opt/module/hadoop-2.7.2/share/hadoop/mapreduce/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/mapreduce/*:/opt/module/hadoop-2.7.2/contrib/capacity-scheduler/*.jar:/opt/module/hive-1.2.1/lib/*' -Djava.library.path=:/opt/module/hadoop-2.7.2/lib/native org.apache.flume.node.Application --name a1 --conf-file job/flume-netcat-logger.conf
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:/opt/module/apache-flume-1.7.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
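The netcat source started above accepts newline-terminated lines over TCP and, by default, acknowledges each event with "OK". The exchange can be sketched without a running agent by standing up a throwaway listener that plays the source's role (the listener, OS-assigned port, and "OK" reply here are a local simulation, not the real Flume process):

```python
import socket
import threading

# Stand-in for the Flume netcat source: one accept, one event, one "OK" ack.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
received = []

def accept_one():
    conn, _ = server.accept()
    received.append(conn.recv(1024))   # the newline-terminated event line
    conn.sendall(b"OK\n")              # ack, as the netcat source does by default
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

# Client side: what `nc localhost 44444` does when you type "cevent" + Enter.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"cevent\n")
reply = client.recv(16)
client.close()
t.join()
server.close()

print(received[0], reply)  # b'cevent\n' b'OK\n'
```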
         
        
       
      
      
      

      9.5 Choose where the logger output is stored

      
      
      
       
        
        [cevent@hadoop207 apache-flume-1.7.0]$ cd conf/
        [cevent@hadoop207 conf]$ ll
        total 16
        -rw-r--r--. 1 cevent cevent 1661 9月  26 2016 flume-conf.properties.template
        -rw-r--r--. 1 cevent cevent 1455 9月  26 2016 flume-env.ps1.template
        -rw-r--r--. 1 cevent cevent 1563 6月  11 13:41 flume-env.sh
        -rw-r--r--. 1 cevent cevent 3107 9月  26 2016 log4j.properties
        [cevent@hadoop207 conf]$ vim log4j.properties
        #
        # Licensed to the Apache Software Foundation (ASF) under one
        # or more contributor license agreements.  See the NOTICE file
        # distributed with this work for additional information
        # regarding copyright ownership.  The ASF licenses this file
        # to you under the Apache License, Version 2.0 (the
        # "License"); you may not use this file except in compliance
        # with the License.  You may obtain a copy of the License at
        #
        #  http://www.apache.org/licenses/LICENSE-2.0
        #
        # Unless required by applicable law or agreed to in writing,
        # software distributed under the License is distributed on an
        # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
        # KIND, either express or implied.  See the License for the
        # specific language governing permissions and limitations
        # under the License.
        #

        # Define some default values that can be overridden by system properties.
        #
        # For testing, it may also be convenient to specify
        # -Dflume.root.logger=DEBUG,console when launching flume.

        #flume.root.logger=DEBUG,console
        # default: write log output to LOGFILE instead of the console
        flume.root.logger=INFO,LOGFILE
        #flume.log.dir=./logs
        flume.log.dir=/opt/module/apache-flume-1.7.0/loggers

        flume.log.file=flume.log

        log4j.logger.org.apache.flume.lifecycle = INFO
        log4j.logger.org.jboss = WARN
        log4j.logger.org.mortbay = INFO
        log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
        log4j.logger.org.apache.hadoop = INFO
        log4j.logger.org.apache.hadoop.hive = ERROR

        # Define the root logger to the system property "flume.root.logger".
        log4j.rootLogger=${flume.root.logger}


        # put all logs in one directory, flume.log.dir. Stock log4j rolling file appender
        # Default log rotation configuration
        log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
        log4j.appender.LOGFILE.MaxFileSize=100MB
        log4j.appender.LOGFILE.MaxBackupIndex=10
        log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
        log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n


        # Warning: If you enable the following appender it will fill up your disk if you don't have a cleanup job!
        # This uses the updated rolling file appender from log4j-extras that supports a reliable time-based rolling policy.
        # See http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
        # roll the log daily. Add "DAILY" to flume.root.logger above if you want to use this
        log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
        log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
        log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
        log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
        log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
        log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n


        # console output
        # Add "console" to flume.root.logger above if you want to use this
        log4j.appender.console=org.apache.log4j.ConsoleAppender
        log4j.appender.console.target=System.err
        log4j.appender.console.layout=org.apache.log4j.PatternLayout
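The ConversionPattern `%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n` maps fairly directly onto Python's `logging` formatter attributes. A rough equivalent sketch (the NDC field `%x` and the `,SSS` millisecond suffix have no one-to-one counterpart here, so they are approximated/omitted):

```python
import io
import logging

# Approximate Python-logging version of the log4j pattern above:
#   %d -> asctime, %-5p -> levelname left-padded to 5, %t -> threadName,
#   %C.%M:%L -> module.funcName:lineno, %m -> message
fmt = logging.Formatter(
    fmt="%(asctime)s %(levelname)-5s [%(threadName)s] "
        "(%(module)s.%(funcName)s:%(lineno)d)  - %(message)s",
    datefmt="%d %b %Y %H:%M:%S",
)
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(fmt)
logger = logging.getLogger("flume.demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Component type: CHANNEL, name: c1 started")
print(buf.getvalue())
```

The output line has the same shape as the agent log entries shown later (timestamp, padded level, thread, location, message).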
                                                              
        
        
       
      
      
      

      9.6 Start Flume with log output to the console

      
      
      
       
        
        [cevent@hadoop207 apache-flume-1.7.0]$ sudo netstat -nlp | grep 44444   # check whether port 44444 is already in use
        [sudo] password for cevent: 
        [cevent@hadoop207 apache-flume-1.7.0]$ ll bin
        total 36
        -rwxr-xr-x. 1 cevent cevent 12387 9月  26 2016 flume-ng
        -rw-r--r--. 1 cevent cevent   936 9月  26 2016 flume-ng.cmd
        -rwxr-xr-x. 1 cevent cevent 14176 9月  26 2016 flume-ng.ps1
        [cevent@hadoop207 apache-flume-1.7.0]$ bin/flume-ng agent --name a1 --conf conf/ --conf-file job/flume-netcat-logger.conf
        # start the agent: agent name, conf directory, custom job file
        Info: Sourcing environment configuration script /opt/module/apache-flume-1.7.0/conf/flume-env.sh
        Info: Including Hadoop libraries found via (/opt/module/hadoop-2.7.2/bin/hadoop) for HDFS access
        Info: Including Hive libraries found via (/opt/module/hive-1.2.1) for Hive access
        + exec /opt/module/jdk1.7.0_79/bin/java -Xmx20m -cp '/opt/module/apache-flume-1.7.0/conf:/opt/module/apache-flume-1.7.0/lib/*:/opt/module/hadoop-2.7.2/etc/hadoop:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/common/*:/opt/module/hadoop-2.7.2/share/hadoop/hdfs:/opt/module/hadoop-2.7.2/share/hadoop/hdfs/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/hdfs/*:/opt/module/hadoop-2.7.2/share/hadoop/yarn/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/yarn/*:/opt/module/hadoop-2.7.2/share/hadoop/mapreduce/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/mapreduce/*:/opt/module/hadoop-2.7.2/contrib/capacity-scheduler/*.jar:/opt/module/hive-1.2.1/lib/*' -Djava.library.path=:/opt/module/hadoop-2.7.2/lib/native org.apache.flume.node.Application --name a1 --conf-file job/flume-netcat-logger.conf
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:/opt/module/apache-flume-1.7.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
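Passing `-Dflume.root.logger=INFO,console` on the command line overrides the `flume.root.logger=INFO,LOGFILE` default in log4j.properties, because system properties take precedence over file values when log4j resolves `${flume.root.logger}`. A toy sketch of that precedence (the dicts and `effective` helper are illustrative only):

```python
# Precedence sketch: a -D system property wins over the properties-file default.
file_props = {"flume.root.logger": "INFO,LOGFILE"}       # from log4j.properties
system_props = {"flume.root.logger": "INFO,console"}     # from -Dflume.root.logger=...

def effective(key, system_props, file_props, default=None):
    """Resolve a property the way ${...} substitution does: system first, then file."""
    return system_props.get(key, file_props.get(key, default))

print(effective("flume.root.logger", system_props, file_props))  # → INFO,console
print(effective("flume.root.logger", {}, file_props))            # → INFO,LOGFILE
```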
         
        
       
      
      
      

      9.7 Listen on the port: nc localhost 44444

      
      
      
       
        
        [cevent@hadoop207 apache-flume-1.7.0]$ cd conf/
        [cevent@hadoop207 conf]$ ll
        total 16
        -rw-r--r--. 1 cevent cevent 1661 9月  26 2016 flume-conf.properties.template
        -rw-r--r--. 1 cevent cevent 1455 9月  26 2016 flume-env.ps1.template
        -rw-r--r--. 1 cevent cevent 1563 6月  11 13:41 flume-env.sh
        -rw-r--r--. 1 cevent cevent 3107 9月  26 2016 log4j.properties
        [cevent@hadoop207 conf]$ vim log4j.properties   # edit the log configuration
        #
        # Licensed to the Apache Software Foundation (ASF) under one
        # or more contributor license agreements.  See the NOTICE file
        # distributed with this work for additional information
        # regarding copyright ownership.  The ASF licenses this file
        # to you under the Apache License, Version 2.0 (the
        # "License"); you may not use this file except in compliance
        # with the License.  You may obtain a copy of the License at
        #
        #  http://www.apache.org/licenses/LICENSE-2.0
        #
        # Unless required by applicable law or agreed to in writing,
        # software distributed under the License is distributed on an
        # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
        # KIND, either express or implied.  See the License for the
        # specific language governing permissions and limitations
        # under the License.
        #

        # Define some default values that can be overridden by system properties.
        #
        # For testing, it may also be convenient to specify
        # -Dflume.root.logger=DEBUG,console when launching flume.

        #flume.root.logger=DEBUG,console
        # cevent: default output log to logfile, not console
        flume.root.logger=INFO,LOGFILE
        #flume.log.dir=./logs
        flume.log.dir=/opt/module/apache-flume-1.7.0/loggers

        flume.log.file=flume.log

        log4j.logger.org.apache.flume.lifecycle = INFO
        log4j.logger.org.jboss = WARN
        log4j.logger.org.mortbay = INFO
        log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
        log4j.logger.org.apache.hadoop = INFO
        log4j.logger.org.apache.hadoop.hive = ERROR

        # Define the root logger to the system property "flume.root.logger".
        log4j.rootLogger=${flume.root.logger}


        # cevent: all log put on flume.log.dir. Stock log4j rolling file appender
        # Default log rotation configuration
        log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
        log4j.appender.LOGFILE.MaxFileSize=100MB
        log4j.appender.LOGFILE.MaxBackupIndex=10
        log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
        log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n


        # Warning: If you enable the following appender it will fill up your disk if you don't have a cleanup job!
        # This uses the updated rolling file appender from log4j-extras that supports a reliable time-based rolling policy.
        # See http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
        # cevent: roll the log every day. Add "DAILY" to flume.root.logger above if you want to use this
        log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
        log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
        log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
        log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
        log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
        log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n


        # console
        # Add "console" to flume.root.logger above if you want to use this
        log4j.appender.console=org.apache.log4j.ConsoleAppender
        log4j.appender.console.target=System.err
        "log4j.properties" 71L, 3281C written
        [cevent@hadoop207 conf]$ ll
        total 16
        -rw-r--r--. 1 cevent cevent 1661 9月  26 2016 flume-conf.properties.template
        -rw-r--r--. 1 cevent cevent 1455 9月  26 2016 flume-env.ps1.template
        -rw-r--r--. 1 cevent cevent 1563 6月  11 13:41 flume-env.sh
        -rw-r--r--. 1 cevent cevent 3281 6月  11 17:18 log4j.properties
        [cevent@hadoop207 conf]$ cd ..
        [cevent@hadoop207 apache-flume-1.7.0]$ bin/flume-ng agent --name a1 --conf conf/ --conf-file job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console
        # -Dflume.root.logger=INFO,console sends the log output to the console
        Info: Sourcing environment configuration script /opt/module/apache-flume-1.7.0/conf/flume-env.sh
        Info: Including Hadoop libraries found via (/opt/module/hadoop-2.7.2/bin/hadoop) for HDFS access
        Info: Including Hive libraries found via (/opt/module/hive-1.2.1) for Hive access
        + exec /opt/module/jdk1.7.0_79/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/opt/module/apache-flume-1.7.0/conf:/opt/module/apache-flume-1.7.0/lib/*:/opt/module/hadoop-2.7.2/etc/hadoop:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/common/*:/opt/module/hadoop-2.7.2/share/hadoop/hdfs:/opt/module/hadoop-2.7.2/share/hadoop/hdfs/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/hdfs/*:/opt/module/hadoop-2.7.2/share/hadoop/yarn/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/yarn/*:/opt/module/hadoop-2.7.2/share/hadoop/mapreduce/lib/*:/opt/module/hadoop-2.7.2/share/hadoop/mapreduce/*:/opt/module/hadoop-2.7.2/contrib/capacity-scheduler/*.jar:/opt/module/hive-1.2.1/lib/*' -Djava.library.path=:/opt/module/hadoop-2.7.2/lib/native org.apache.flume.node.Application --name a1 --conf-file job/flume-netcat-logger.conf
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:/opt/module/apache-flume-1.7.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
        2020-06-11 17:20:04,619 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:62)] Configuration provider starting
        2020-06-11 17:20:04,623 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:134)] Reloading configuration file:job/flume-netcat-logger.conf
        2020-06-11 17:20:04,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: k1 Agent: a1
        2020-06-11 17:20:04,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
        2020-06-11 17:20:04,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
        2020-06-11 17:20:04,640 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [a1]
        2020-06-11 17:20:04,641 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:147)] Creating channels
        2020-06-11 17:20:04,653 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel c1 type memory
        2020-06-11 17:20:04,656 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:201)] Created channel c1
        2020-06-11 17:20:04,657 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:41)] Creating instance of source r1, type netcat
        2020-06-11 17:20:04,666 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: k1, type: logger
        2020-06-11 17:20:04,669 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:116)] Channel c1 connected to [r1, k1]
        2020-06-11 17:20:04,675 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:137)] Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6a8bda08 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
        2020-06-11 17:20:04,682 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:144)] Starting Channel c1
        2020-06-11 17:20:04,724 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
        2020-06-11 17:20:04,725 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: c1 started
        2020-06-11 17:20:04,726 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:171)] Starting Sink k1
        2020-06-11 17:20:04,727 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:182)] Starting Source r1
        2020-06-11 17:20:04,728 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:155)] Source starting
        2020-06-11 17:20:04,762 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:169)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
        2020-06-11 17:21:00,410 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 63 65 76 65 6E 74                               cevent }
        2020-06-11 17:21:30,419 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 65 63 68 6F                                     echo }
        ^C2020-06-11 17:23:55,629 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:78)] Stopping lifecycle supervisor 10
        2020-06-11 17:23:55,630 (agent-shutdown-hook) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.stop(PollingPropertiesFileConfigurationProvider.java:84)] Configuration provider stopping
        2020-06-11 17:23:55,631 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:149)] Component type: CHANNEL, name: c1 stopped
        2020-06-11 17:23:55,631 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:155)] Shutdown Metric for type: CHANNEL, name: c1. channel.start.time == 1591867204725
        2020-06-11 17:23:55,631 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:161)] Shutdown Metric for type: CHANNEL, name: c1. channel.stop.time == 1591867435631
        2020-06-11 17:23:55,632 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: c1. channel.capacity == 1000
        2020-06-11 17:23:55,632 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: c1. channel.current.size == 0
        2020-06-11 17:23:55,632 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: c1. channel.event.put.attempt == 2
        2020-06-11 17:23:55,632 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: c1. channel.event.put.success == 2
        2020-06-11 17:23:55,632 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: c1. channel.event.take.attempt == 35
        2020-06-11 17:23:55,632 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:177)] Shutdown Metric for type: CHANNEL, name: c1. channel.event.take.success == 2
        2020-06-11 17:23:55,632 (agent-shutdown-hook) [INFO - org.apache.flume.source.NetcatSource.stop(NetcatSource.java:196)] Source stopping
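The logger sink prints each event body as a hex dump, e.g. `body: 63 65 76 65 6E 74  cevent`. Decoding such a dump back to text is a one-liner, as a quick check that the bytes really are the line typed into nc:

```python
# Decode the event-body hex dumps printed by the Flume logger sink above.
body_hex = "63 65 76 65 6E 74"          # from the first Event line
text = bytes.fromhex(body_hex.replace(" ", "")).decode("utf-8")
print(text)  # → cevent

echo_hex = "65 63 68 6F"                 # from the second Event line
echo_text = bytes.fromhex(echo_hex.replace(" ", "")).decode("utf-8")
print(echo_text)  # → echo
```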
         
        
       
      
      
      

      [nc transmission test]

      [cevent@hadoop207 ~]$ nc localhost 44444
      cevent
      OK
      echo
      OK

      10. Kafka configuration

      10.1 Prepare the package

      kafka

      10.1 Configure Kafka

      
      
      
       
        
        [cevent@hadoop210 soft]$ tar -zxvf kafka_2.11-0.11.0.0.tgz
        [cevent@hadoop210 soft]$ mv kafka_2.11-0.11.0.0 /opt/module/
         
        [cevent@hadoop210 soft]$ cd /opt/module/
        [cevent@hadoop210 module]$ ll
        总用量 24
        drwxrwxr-x.  9 cevent cevent 4096 6月  28 18:04 flume-1.7.0
        drwxr-xr-x. 12 cevent cevent 4096 6月  28 15:21 hadoop-2.7.2
        drwxrwxr-x.  9 cevent cevent 4096 6月  28 16:35 hive-1.2.1
        drwxr-xr-x.  8 cevent cevent 4096 3月  24 09:14 jdk1.7.0_79
        drwxr-xr-x.  6 cevent cevent 4096 6月  23 2017 kafka_2.11-0.11.0.0
        drwxr-xr-x. 11 cevent cevent 4096 6月  28 15:50 zookeeper-3.4.10
        [cevent@hadoop210 module]$ cd
        kafka_2.11-0.11.0.0/
        [cevent@hadoop210 kafka_2.11-0.11.0.0]$
        ll
        总用量 52
        drwxr-xr-x. 3 cevent cevent  4096 6月  23 2017 bin
        drwxr-xr-x. 2 cevent cevent  4096 6月  23 2017 config
        drwxr-xr-x. 2 cevent cevent  4096 6月  28 18:18 libs
        -rw-r--r--. 1 cevent cevent 28824 6月  23 2017 LICENSE
        -rw-r--r--. 1 cevent cevent   336 6月  23 2017 NOTICE
        drwxr-xr-x. 2 cevent cevent  4096 6月  23 2017 site-docs
        [cevent@hadoop210 kafka_2.11-0.11.0.0]$
        vim config/server.properties 
        # Licensed to the Apache Software
        Foundation (ASF) under one or more
        # contributor license agreements.  See the NOTICE file distributed with
        # this work for additional information regarding
        copyright ownership.
        # The ASF licenses this file to You under
        the Apache License, Version 2.0
        # (the "License"); you may not
        use this file except in compliance with
        # the License.  You may obtain a copy of the License at
        #
        #   
        http://www.apache.org/licenses/LICENSE-2.0
        #
        # Unless required by applicable law or
        agreed to in writing, software
        # distributed under the License is
        distributed on an "AS IS" BASIS,
        # WITHOUT WARRANTIES OR CONDITIONS OF ANY
        KIND, either express or implied.
        # See the License for the specific
        language governing permissions and
        # limitations under the License.
         
        # see kafka.server.KafkaConfig for
        additional details and defaults
         
        ############################# Server
        Basics #############################
         
        # The id of the broker. This must be set
        to a unique integer for each broker.
        broker.id=210
         
        # Switch to enable topic deletion or not,
        default value is false
        #delete.topic.enable=true
         
        ############################# Socket
        Server Settings #############################
         
        # The address the socket server listens
        on. It will get the value returned from
        #
        java.net.InetAddress.getCanonicalHostName() if not configured.
        #  
        FORMAT:
        #    
        listeners = listener_name://host_name:port
        #  
        EXAMPLE:
        #    
        listeners = PLAINTEXT://your.host.name:9092
        #listeners=PLAINTEXT://:9092
         
        # Hostname and port the broker will
        advertise to producers and consumers. If not set,
        # it uses the value for
        "listeners" if configured. 
        Otherwise, it will use the value
        # returned from java.net.InetAddress.getCanonicalHostName().
        #advertised.listeners=PLAINTEXT://your.host.name:9092
         
        # Maps listener names to security
        protocols, the default is for them to be the same. See the config
        documentation for more details
        #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
         
        # The number of threads that the server
        uses for receiving requests from the network and sending responses to the
        network
        num.network.threads=3
         
        # The number of threads that the server
        uses for processing requests, which may include disk I/O
        num.io.threads=8
         
        # The send buffer (SO_SNDBUF) used by the
        socket server
        socket.send.buffer.bytes=102400
         
        # The receive buffer (SO_RCVBUF) used by
        the socket server
        socket.receive.buffer.bytes=102400
         
        # The maximum size of a request that the
        socket server will accept (protection against OOM)
        socket.request.max.bytes=104857600
         
         
        ############################# Log Basics #############################
         
        # A comma seperated list of directories
        under which to store log files
        log.dirs=/opt/module/kafka_2.11-0.11.0.0/logs
         
        # The default number of log partitions
        per topic. More partitions allow greater
        # parallelism for consumption, but this
        will also result in more files across
        # the brokers.
        num.partitions=1
         
        # The number of threads per data
        directory to be used for log recovery at startup and flushing at shutdown.
        # This value is recommended to be
        increased for installations with data dirs located in RAID array.
        num.recovery.threads.per.data.dir=1
         
        ############################# Internal
        Topic Settings 
        #############################
        # The replication factor for the group
        metadata internal topics "__consumer_offsets" and "__transaction_state"
        # For anything other than development
        testing, a value greater than 1 is recommended for to ensure availability
        such as 3.
        offsets.topic.replication.factor=1
        transaction.state.log.replication.factor=1
        transaction.state.log.min.isr=1
         
        ############################# Log Flush
        Policy #############################
         
        # Messages are immediately written to the
        filesystem but by default we only fsync() to sync
        # the OS cache lazily. The following
        configurations control the flush of data to disk.
        # There are a few important trade-offs
        here:
        #   
        1. Durability: Unflushed data may be lost if you are not using
        replication.
        #   
        2. Latency: Very large flush intervals may lead to latency spikes when
        the flush does occur as there will be a lot of data to flush.
        #   
        3. Throughput: The flush is generally the most expensive operation,
        and a small flush interval may lead to exceessive seeks.
        # The settings below allow one to
        configure the flush policy to flush data after a period of time or
        # every N messages (or both). This can be
        done globally and overridden on a per-topic basis.
         
        # The number of messages to accept before
        forcing a flush of data to disk
        #log.flush.interval.messages=10000
         
        # The maximum amount of time a message
        can sit in a log before we force a flush
        #log.flush.interval.ms=1000
         
        ############################# Log
        Retention Policy #############################
         
        # The following configurations control
        the disposal of log segments. The policy can
        # be set to delete segments after a
        period of time, or after a given size has accumulated.
        # A segment will be deleted whenever
        *either* of these criteria are met. Deletion always happens
        # from the end of the log.
         
        # The minimum age of a log file to be
        eligible for deletion due to age
        log.retention.hours=168
         
        # A size-based retention policy for logs.
        Segments are pruned from the log as long as the remaining
        # segments don't drop below
        log.retention.bytes. Functions independently of log.retention.hours.
        #log.retention.bytes=1073741824
         
        # The maximum size of a log segment file.
        When this size is reached a new log segment will be created.
        log.segment.bytes=1073741824
         
        # The interval at which log segments are
        checked to see if they can be deleted according
        # to the retention policies
        log.retention.check.interval.ms=300000
         
        ############################# Zookeeper
        #############################
         
        # Zookeeper connection string (see
        zookeeper docs for details).
        # This is a comma separated host:port
        pairs, each corresponding to a zk
        # server. e.g.
        "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
        # You can also append an optional chroot
        string to the urls to specify the
        # root directory for all kafka znodes.
        # zookeeper.connect=localhost:2181
        zookeeper.connect=hadoop210.cevent.com:2181,hadoop211.cevent.com:2181,hadoop212.cevent.com:2181
         
        # Timeout in ms for connecting to
        zookeeper
        zookeeper.connection.timeout.ms=6000
         
         
        ############################# Group
        Coordinator Settings #############################
         
        # The following configuration specifies the
        time, in milliseconds, that the GroupCoordinator will delay the initial
        consumer rebalance.
        # The rebalance will be further delayed
        by the value of group.initial.rebalance.delay.ms as new members join the
        group, up to a maximum of max.poll.interval.ms.
        # The default value for this is 3
        seconds.
        # We override this to 0 here as it makes
        for a better out-of-the-box experience for development and testing.
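`zookeeper.connect` above is a comma-separated `host:port` list (optionally with a chroot suffix). A quick sanity sketch on the value configured in server.properties; the hosts are the ones set above:

```shell
# Split the configured zookeeper.connect string and check that each
# entry has a host:port shape before handing it to the broker.
zk='hadoop210.cevent.com:2181,hadoop211.cevent.com:2181,hadoop212.cevent.com:2181'
echo "$zk" | tr ',' '\n' | while read -r e; do
  case "$e" in
    *:[0-9]*) echo "ok  $e" ;;
    *)        echo "bad $e" ;;
  esac
done
```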
         
        
       
      
      
      

      10.2 Configure environment variables on hadoop207

      
      
      
       
        
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ sudo vim /etc/profile
        [sudo] password for cevent: 
        fi
         
        # Path manipulation
        if [ "$EUID" = "0" ];
        then
           
        pathmunge /sbin
           
        pathmunge /usr/sbin
           
        pathmunge /usr/local/sbin
        else
           
        pathmunge /usr/local/sbin after
           
        pathmunge /usr/sbin after
           
        pathmunge /sbin after
        fi
         
        HOSTNAME=`/bin/hostname 2>/dev/null`
        HISTSIZE=1000
        if [ "$HISTCONTROL" =
        "ignorespace" ] ; then
           
        export HISTCONTROL=ignoreboth
        else
           
        export HISTCONTROL=ignoredups
        fi
         
        export PATH USER LOGNAME MAIL HOSTNAME
        HISTSIZE HISTCONTROL
         
        # By default, we want umask to get set.
        This sets it for login shell
        # Current threshold for system reserved
        uid/gids is 200
        # You could check uidgid reservation
        validity in
        # /usr/share/doc/setup-*/uidgid file
        if [ $UID -gt 199 ] && [
        "`id -gn`" = "`id -un`" ]; then
           
        umask 002
        else
           
        umask 022
        fi
         
        for i in /etc/profile.d/*.sh ; do
           
        if [ -r "$i" ]; then
               
        if [ "${-#*i}" != "$-" ]; then
                    . "$i"
               
        else
                    . "$i" >/dev/null
        2>&1
               
        fi
           
        fi
        done
         
        unset i
        unset -f pathmunge
        #JAVA_HOME
        export JAVA_HOME=/opt/module/jdk1.7.0_79
        export PATH=$PATH:$JAVA_HOME/bin
         
        #HADOOP_HOME
        export HADOOP_HOME=/opt/module/hadoop-2.7.2
        export PATH=$PATH:$HADOOP_HOME/bin
        export PATH=$PATH:$HADOOP_HOME/sbin
         
        #HIVE_HOME
        export HIVE_HOME=/opt/module/hive-1.2.1
         
        export PATH=$PATH:$HIVE_HOME/bin
         
        #FLUME_HOME
        export FLUME_HOME=/opt/module/apache-flume-1.7.0
        export PATH=$PATH:$FLUME_HOME/bin
         
        #ZOOKEEPER_HOME
        export ZOOKEEPER_HOME=/opt/module/zookeeper-3.4.10
        export PATH=$PATH:$ZOOKEEPER_HOME/bin
         
        #KAFKA_HOME
        export KAFKA_HOME=/opt/module/kafka_2.11-0.11.0.0
        export PATH=$PATH:$KAFKA_HOME/bin
         
        Run the sync script:
        xsync
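After editing /etc/profile and syncing it out, it is worth confirming that Kafka's bin directory actually lands on PATH. A sketch: the fragment below is a throwaway copy of the two export lines, sourced in a subshell so nothing global changes:

```shell
# Write the two KAFKA_HOME export lines to a temp fragment, source it in
# a subshell, and print "on-path" if kafka's bin dir ends up on PATH.
frag=$(mktemp)
cat > "$frag" <<'EOF'
export KAFKA_HOME=/opt/module/kafka_2.11-0.11.0.0
export PATH=$PATH:$KAFKA_HOME/bin
EOF
( . "$frag"; case ":$PATH:" in *":$KAFKA_HOME/bin:"*) echo on-path ;; esac )
```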
        
       
      
      
      

      10.3 Source the profile to apply the KAFKA configuration

      
      
      
       
        
         
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ source /etc/profile
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ echo $KAFKA_HOME
        /opt/module/kafka_2.11-0.11.0.0
        
       
      
      
      

      10.4 Verify the kafka- commands (type kafka- and press Tab twice)

      
      
      
       
        
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ kafka-
        kafka-acls.sh                       
        kafka-reassign-partitions.sh
        kafka-broker-api-versions.sh         kafka-replay-log-producer.sh
        kafka-configs.sh                     kafka-replica-verification.sh
        kafka-console-consumer.sh            kafka-run-class.sh
        kafka-console-producer.sh            kafka-server-start.sh
        kafka-consumer-groups.sh             kafka-server-stop.sh
        kafka-consumer-offset-checker.sh     kafka-simple-consumer-shell.sh
        kafka-consumer-perf-test.sh          kafka-streams-application-reset.sh
        kafka-delete-records.sh              kafka-topics.sh
        kafka-mirror-maker.sh                kafka-verifiable-consumer.sh
        kafka-preferred-replica-election.sh  kafka-verifiable-producer.sh
        kafka-producer-perf-test.sh          
        
       
      
      
      

      10.5 Startup error: broker.id must be changed to an integer

      
      
      
       
        
        Start Kafka:
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ kafka-server-start.sh
        -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties 
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$
        ll
        总用量 56
        drwxr-xr-x. 3 cevent cevent  4096 6月  23 2017 bin
        drwxr-xr-x. 2 cevent cevent  4096 6月  17 16:08 config
        drwxr-xr-x. 2 cevent cevent  4096 6月  15 13:22 libs
        -rw-r--r--. 1 cevent cevent 28824 6月  23 2017 LICENSE
        drwxrwxr-x. 2 cevent cevent  4096 6月  17 16:28 logs
        -rw-r--r--. 1 cevent cevent   336 6月  23 2017 NOTICE
        drwxr-xr-x. 2 cevent cevent  4096 6月  23 2017 site-docs
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$
        cd logs/
        [cevent@hadoop207 logs]$ ll
        总用量 12
        -rw-rw-r--. 1 cevent cevent    0 6月  17 16:28 controller.log
        -rw-rw-r--. 1 cevent cevent    0 6月  17 16:28
        kafka-authorizer.log
        -rw-rw-r--. 1 cevent cevent    0 6月  17 16:28 kafka-request.log
        -rw-rw-r--. 1 cevent cevent 1048 6月  17 16:28 kafkaServer-gc.log.0.current
        -rw-rw-r--. 1 cevent cevent  813 6月  17 16:28 kafkaServer.out
        -rw-rw-r--. 1 cevent cevent    0 6月  17 16:28 log-cleaner.log
        -rw-rw-r--. 1 cevent cevent  813 6月  17 16:28 server.log
        -rw-rw-r--. 1 cevent cevent    0 6月  17 16:28 state-change.log
        [cevent@hadoop207 logs]$ cat server.log 
        [2020-06-17 16:28:55,630] FATAL  (kafka.Kafka$)
        org.apache.kafka.common.config.ConfigException:
        Invalid value kafka207 for configuration broker.id: Not a number of type INT
                at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:713)
                at
        org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:460)
                at
        org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:453)
                at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
                at
        kafka.server.KafkaConfig.<init>(KafkaConfig.scala:883)
                at
        kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:867)
                at
        kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:864)
                at
        kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
                at kafka.Kafka$.main(Kafka.scala:58)
                at kafka.Kafka.main(Kafka.scala)
         
        [cevent@hadoop207 module]$ cd kafka_2.11-0.11.0.0/
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ cd config/
        [cevent@hadoop207 config]$ ll
        总用量 64
        -rw-r--r--. 1 cevent cevent  906 6月  23 2017
        connect-console-sink.properties
        -rw-r--r--. 1 cevent cevent  909 6月  23 2017
        connect-console-source.properties
        -rw-r--r--. 1 cevent cevent 5807 6月  23 2017 connect-distributed.properties
        -rw-r--r--. 1 cevent cevent  883 6月  23 2017
        connect-file-sink.properties
        -rw-r--r--. 1 cevent cevent  881 6月  23 2017
        connect-file-source.properties
        -rw-r--r--. 1 cevent cevent 1111 6月  23 2017 connect-log4j.properties
        -rw-r--r--. 1 cevent cevent 2730 6月  23 2017 connect-standalone.properties
        -rw-r--r--. 1 cevent cevent 1199 6月  23 2017 consumer.properties
        -rw-r--r--. 1 cevent cevent 4696 6月  23 2017 log4j.properties
        -rw-r--r--. 1 cevent cevent 1900 6月  23 2017 producer.properties
        -rw-r--r--. 1 cevent cevent 7072 6月  17 16:08 server.properties
        -rw-r--r--. 1 cevent cevent 1032 6月  23 2017 tools-log4j.properties
        -rw-r--r--. 1 cevent cevent 1023 6月  23 2017 zookeeper.properties
        [cevent@hadoop207 config]$ vim server.properties 
         
        # The id of the broker. This must be set
        to a unique integer for each broker.
        [hadoop207]
        broker.id=7
        [hadoop208]
        broker.id=8
        [hadoop209]
        broker.id=9
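Since `broker.id` must be a unique integer per broker, the per-host edit above can be scripted with sed. A sketch against a throwaway copy of the file, seeded with the invalid `kafka207` value from the error; swap in the real `config/server.properties` and the host's own id when applying:

```shell
# Rewrite broker.id in place with sed; the temp file stands in for
# /opt/module/kafka_2.11-0.11.0.0/config/server.properties.
props=$(mktemp)
printf 'broker.id=kafka207\nlog.dirs=/opt/module/kafka_2.11-0.11.0.0/logs\n' > "$props"
sed -i 's/^broker\.id=.*/broker.id=7/' "$props"
grep '^broker.id=' "$props"
```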
         
        
       
      
      
      

      10.6 Restart Kafka

      
      
      
       
        
        First, start ZooKeeper on every node:
        [cevent@hadoop208 zookeeper-3.4.10]$ bin/zkServer.sh
        start
        ZooKeeper JMX enabled by default
        Using config:
        /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
        Starting zookeeper ... STARTED
        [cevent@hadoop208 zookeeper-3.4.10]$ jps
        8227 Jps
        7894 DataNode
        8012 NodeManager
        8188 QuorumPeerMain
         
        [cevent@hadoop207 logs]$ kafka-server-start.sh -daemon
        /opt/module/kafka_2.11-0.11.0.0/config/server.properties   # start the service
        (typing $KAFKA_HOME/c and pressing Tab completes the path quickly)
         
        [2020-06-17 16:48:22,346] INFO Registered
        broker 7 at path /brokers/ids/7 with addresses: EndPoint(hadoop207.cevent.com,9092,ListenerName(PLAINTEXT),PLAINTEXT)
        (kafka.utils.ZkUtils)
        [2020-06-17 16:48:22,348] WARN No
        meta.properties file under dir
        /opt/module/kafka_2.11-0.11.0.0/logs/meta.properties
        (kafka.server.BrokerMetadataCheckpoint)
        [2020-06-17 16:48:22,393] INFO Kafka
        version : 0.11.0.0 (org.apache.kafka.common.utils.AppInfoParser)
        [2020-06-17 16:48:22,393] INFO Kafka
        commitId : cb8625948210849f (org.apache.kafka.common.utils.AppInfoParser)
        [2020-06-17 16:48:22,393] INFO [Kafka
        Server 7], started (kafka.server.KafkaServer)
        [cevent@hadoop207 logs]$
        kafka-server-start.sh -daemon
        /opt/module/kafka_2.11-0.11.0.0/config/server.properties 
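The INFO line above is the simplest confirmation that the broker registered itself in ZooKeeper; the znode path can be pulled straight out of it. A sketch, with the log line pasted from the run above:

```shell
# Extract the registration znode from the server log line quoted above.
line='[2020-06-17 16:48:22,346] INFO Registered broker 7 at path /brokers/ids/7'
echo "$line" | grep -o '/brokers/ids/[0-9]*'
```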
        
       
      
      
      

      10.7 Kafka cluster startup steps

      
      
      
       
        
        [cevent@hadoop207 hadoop-2.7.2]$ sbin/start-dfs.sh 
        [cevent@hadoop207 hadoop-2.7.2]$ sbin/start-yarn.sh 
        [cevent@hadoop207 hadoop-2.7.2]$ jps
        4167 Jps
        3352 DataNode
        3237 NameNode
        3848 NodeManager
        3731 ResourceManager
        3562 SecondaryNameNode
         
        [cevent@hadoop207 ~]$ cd /opt/module/zookeeper-3.4.10/
        [cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh start
        ZooKeeper JMX enabled by default
        Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
        Starting zookeeper ... STARTED
        [cevent@hadoop207 zookeeper-3.4.10]$ jps
        3352 DataNode
        3237 NameNode
        3848 NodeManager
        3731 ResourceManager
        4223 QuorumPeerMain
        4246 Jps
        3562 SecondaryNameNode
         
        [cevent@hadoop207 ~]$ cd /opt/module/kafka_2.11-0.11.0.0/
        Start Kafka (you can type $KAFKA_HOME/ and use Tab completion):
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ kafka-server-start.sh
        -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties 
        [cevent@hadoop207 kafka_2.11-0.11.0.0]$ jps
        4539 Kafka
        3352 DataNode
        3237 NameNode
        3848 NodeManager
        3731 ResourceManager
        4223 QuorumPeerMain
        4565 Jps
        3562 SecondaryNameNode
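Repeating the ZooKeeper-then-Kafka start on every node by hand gets tedious. A sketch of a start-all loop: the host names are this cluster's, and `run` only echoes here; replacing it with an `ssh` call would make it real (that swap is an assumption, not something from the original notes):

```shell
# Dry-run start-all loop: prints the command that would run on each host.
run() { echo "$@"; }   # stand-in for: ssh to the host and run the command
for host in hadoop207.cevent.com hadoop208.cevent.com hadoop209.cevent.com; do
  run "$host" /opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh \
      -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties
done
```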
         
        
       
      
      
      

      [Other virtual machines]

      
      
      
       
        
        [cevent@hadoop208 ~]$ cd /opt/module/zookeeper-3.4.10/
        [cevent@hadoop208 zookeeper-3.4.10]$ bin/zkServer.sh start
        ZooKeeper JMX enabled by default
        Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
        Starting zookeeper ... STARTED
        [cevent@hadoop208 zookeeper-3.4.10]$ jps
        3406 QuorumPeerMain
        3235 NodeManager
        3437 Jps
         
        [cevent@hadoop208 ~]$ cd /opt/module/kafka_2.11-0.11.0.0/
        [cevent@hadoop208 kafka_2.11-0.11.0.0]$ kafka-server-start.sh -daemon config/server.properties 
        [cevent@hadoop208 kafka_2.11-0.11.0.0]$ jps
        3732 Kafka
        3406 QuorumPeerMain
        3235 NodeManager
        3797 Jps
         
        
       
      
      
      

      11. HBase installation and deployment

      11.1 Prepare the package

      hbase

      11.2 Extract HBase

      
      
      Extract HBase
      
      [cevent@hadoop207 module]$ tar -zxvf hbase-1.3.1-bin.tar.gz  -C /opt/module/
      
      [cevent@hadoop207 soft]$ cd /opt/module/
      
      [cevent@hadoop207 module]$ ll
      
      总用量 44
      
      drwxrwxr-x. 12 cevent cevent 4096 6月 
      19 17:50 apache-flume-1.7.0
      
      drwxrwxr-x. 
      8 cevent cevent 4096 6月  19 17:53 datas
      
      drwxr-xr-x. 11 cevent cevent 4096 5月 
      22 2017 hadoop-2.7.2
      
      drwxrwxr-x. 
      3 cevent cevent 4096 6月   5 13:27 hadoop-2.7.2-snappy
      
      drwxrwxr-x.  7 cevent cevent 4096
      6月  19 23:09 hbase-1.3.1
      
      drwxrwxr-x. 10 cevent cevent 4096 5月 
      22 13:34 hive-1.2.1
      
      drwxr-xr-x. 
      8 cevent cevent 4096 4月  11 2015 jdk1.7.0_79
      
      drwxr-xr-x. 
      7 cevent cevent 4096 6月  17 18:23 kafka_2.11-0.11.0.0
      
      drwxrwxr-x. 
      2 cevent cevent 4096 6月  19 18:22 kafka-monitor
      
      -rw-rw-r--. 
      1 cevent cevent   23 6月 
      16 21:11 xsync.txt
      
      drwxr-xr-x. 11 cevent cevent 4096 6月 
      17 11:54 zookeeper-3.4.10
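The `tar -zxvf … -C /opt/module/` pattern used above extracts straight into the target directory. A tiny self-contained rehearsal of the same flags, using a throwaway archive instead of the real hbase-1.3.1-bin.tar.gz:

```shell
# Build a throwaway archive, then extract it with the same -C pattern
# used for hbase-1.3.1-bin.tar.gz above.
work=$(mktemp -d)
mkdir -p "$work/hbase-1.3.1/conf" "$work/module"
touch "$work/hbase-1.3.1/conf/hbase-env.sh"
tar -czf "$work/hbase.tar.gz" -C "$work" hbase-1.3.1
tar -zxf "$work/hbase.tar.gz" -C "$work/module"
ls "$work/module"
```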
      
       
      
      

      11.3 Changes to hbase-env.sh

      
      
      
       
        
        [cevent@hadoop207 module]$ cd hbase-1.3.1/
        [cevent@hadoop207 hbase-1.3.1]$ ll
        总用量 348
        drwxr-xr-x.  4 cevent cevent   4096 4月   5 2017 bin
        -rw-r--r--.  1 cevent cevent 148959 4月   7 2017 CHANGES.txt
        drwxr-xr-x.  2 cevent cevent   4096 4月   5 2017 conf
        drwxr-xr-x. 12 cevent cevent   4096 4月   7 2017 docs
        drwxr-xr-x.  7 cevent cevent   4096 4月   7 2017 hbase-webapps
        -rw-r--r--.  1 cevent cevent    261 4月   7 2017 LEGAL
        drwxrwxr-x.  3 cevent cevent   4096 6月  19 23:09 lib
        -rw-r--r--.  1 cevent cevent 130696 4月   7 2017 LICENSE.txt
        -rw-r--r--.  1 cevent cevent  43258 4月   7 2017 NOTICE.txt
        -rw-r--r--.  1 cevent cevent   1477 9月  21 2016 README.txt
        [cevent@hadoop207 hbase-1.3.1]$ cd conf/
        [cevent@hadoop207 conf]$ ll
        总用量 40
        -rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
        -rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
        -rw-r--r--. 1 cevent cevent 7468 11月  7 2016 hbase-env.sh
        -rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
        -rw-r--r--. 1 cevent cevent  934 9月  21 2016 hbase-site.xml
        -rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
        -rw-r--r--. 1 cevent cevent   10 12月  1 2015 regionservers
        [cevent@hadoop207 conf]$ vim hbase-env.sh 
        #
        #/**
        # * Licensed to the Apache Software Foundation (ASF) under one
        # * or more contributor license agreements.  See the NOTICE file
        # * distributed with this work for additional information
        # * regarding copyright ownership.  The ASF licenses this file
        # * to you under the Apache License, Version 2.0 (the
        # * "License"); you may not use this file except in compliance
        # * with the License.  You may obtain a copy of the License at
        # *
        # *     http://www.apache.org/licenses/LICENSE-2.0
        # *
        # * Unless required by applicable law or agreed to in writing, software
        # * distributed under the License is distributed on an "AS IS" BASIS,
        # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        # * See the License for the specific language governing permissions and
        # * limitations under the License.
        # */
         
        # Set environment variables here.
         
        # This script sets variables multiple times over the course of starting an hbase process,
        # so try to keep things idempotent unless you want to take an even deeper look
        # into the startup scripts (bin/hbase, etc.)
         
        # The java implementation to use.  Java 1.7+ required.
        # export JAVA_HOME=/usr/java/jdk1.6.0/
        #JAVA_HOME
        export JAVA_HOME=/opt/module/jdk1.7.0_79
        export PATH=$PATH:$JAVA_HOME/bin
         
        # Extra Java CLASSPATH elements.  Optional.
        # export HBASE_CLASSPATH=
         
        # The maximum amount of heap to use. Default is left to JVM default.
        # export HBASE_HEAPSIZE=1G
         
        # Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
        # offheap, set the value to "8G".
        # export HBASE_OFFHEAPSIZE=1G
         
        # Extra Java runtime options.
        # Below are what we set by default.  May only work with SUN JVM.
        # For more on why as well as other possible settings,
        # see http://wiki.apache.org/hadoop/PerformanceTuning
        export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
         
        # Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
        export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
        export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
         
        # Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.
         
        # This enables basic gc logging to the .out file.
        # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
         
        # This enables basic gc logging to its own file.
        # If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
        # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
         
        # This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
        # If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
        # export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
         
        # Uncomment one of the below three options to enable java garbage collection logging for the client processes.
         
        # This enables basic gc logging to the .out file.
        # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
         
        # This enables basic gc logging to its own file.
        # If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
        # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"
         
        # This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
        # If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
        # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"
         
        # See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
        # needed setting up off-heap block caching.
         
        # Uncomment and adjust to enable JMX exporting
        # See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
        # More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
        # NOTE: HBase provides an alternative JMX implementation to fix the random ports issue, please see JMX
        # section in HBase Reference Guide for instructions.
         
        # export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
        # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
        # export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
        # export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
        # export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
        # export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105"
         
        # File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
        # export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
         
        # Uncomment and adjust to keep all the Region Server pages mapped to be memory resident
        #HBASE_REGIONSERVER_MLOCK=true
        #HBASE_REGIONSERVER_UID="hbase"
         
        # File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
        # export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters
         
        # Extra ssh options.  Empty by default.
        # export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
         
        # Where log files are stored.  $HBASE_HOME/logs by default.
        # export HBASE_LOG_DIR=${HBASE_HOME}/logs
         
        # Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
        # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
        # export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
        # export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
        # export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"
         
        # A string representing this instance of hbase. $USER by default.
        # export HBASE_IDENT_STRING=$USER
         
        # The scheduling priority for daemon processes.  See 'man nice'.
        # export HBASE_NICENESS=10
         
        # The directory where pid files are stored. /tmp by default.
        # export HBASE_PID_DIR=/var/hadoop/pids
         
        # Seconds to sleep between slave commands.  Unset by default.  This
        # can be useful in large clusters, where, e.g., slave rsyncs can
        # otherwise arrive faster than the master can service them.
        # export HBASE_SLAVE_SLEEP=0.1
         
        # Tell HBase whether it should manage it's own instance of Zookeeper or not.
        # export HBASE_MANAGES_ZK=true  # disabled: do not let HBase start its own ZooKeeper (we run our own quorum)
        export HBASE_MANAGES_ZK=false
        # The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
        # RFA appender. Please refer to the log4j.properties file to see more details on this appender.
        # In case one needs to do log rolling on a date change, one should set the environment property
        # HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
        # For example:
        # HBASE_ROOT_LOGGER=INFO,DRFA
        
       
      
      
      

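For repeatable installs, the two edits made by hand in vim above can also be scripted. A minimal sketch, assuming the paths from this install (the temp file stands in for conf/hbase-env.sh so the snippet runs anywhere):

```shell
# Stand-in for conf/hbase-env.sh with the two stock defaults we change.
ENV_FILE=$(mktemp)
cat > "$ENV_FILE" <<'EOF'
# export JAVA_HOME=/usr/java/jdk1.6.0/
# export HBASE_MANAGES_ZK=true
EOF

# Append the same overrides the section sets by hand:
# point at the cluster JDK and stop HBase from managing its own ZooKeeper.
cat >> "$ENV_FILE" <<'EOF'
export JAVA_HOME=/opt/module/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
export HBASE_MANAGES_ZK=false
EOF

grep '^export' "$ENV_FILE"
```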
      11.4 Changes to hbase-site.xml

      
      
      
       
        
        [cevent@hadoop207 conf]$ ll
        总用量 40
        -rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
        -rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
        -rw-r--r--. 1 cevent cevent 7497 6月  19 23:17 hbase-env.sh
        -rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
        -rw-r--r--. 1 cevent cevent  934 9月  21 2016 hbase-site.xml
        -rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
        -rw-r--r--. 1 cevent cevent   10 12月  1 2015 regionservers
        [cevent@hadoop207 conf]$ vim hbase-site.xml 
        <?xml version="1.0"?>
        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
        <!--
        /**
         *
         * Licensed to the Apache Software Foundation (ASF) under one
         * or more contributor license agreements.  See the NOTICE file
         * distributed with this work for additional information
         * regarding copyright ownership.  The ASF licenses this file
         * to you under the Apache License, Version 2.0 (the
         * "License"); you may not use this file except in compliance
         * with the License.  You may obtain a copy of the License at
         *
         *     http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         */
        -->
        <configuration>
                <property>
                        <name>hbase.rootdir</name>
                        <value>hdfs://hadoop207.cevent.com:8020/HBase</value>
                </property>
         
                <property>
                        <name>hbase.cluster.distributed</name>
                        <value>true</value>
                </property>
         
                <!-- New in 0.98; earlier versions had no .port property and the default port was 60000 -->
                <property>
                        <name>hbase.master.port</name>
                        <value>16000</value>
                </property>
         
                <property>
                        <name>hbase.zookeeper.quorum</name>
                        <value>hadoop207.cevent.com,hadoop208.cevent.com,hadoop209.cevent.com</value>
                </property>
         
                <property>
                        <name>hbase.zookeeper.property.dataDir</name>
                        <value>/opt/module/zookeeper-3.4.10/data/zkData</value>
                </property>
        </configuration>
         
        
       
      
      
      

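After editing, it helps to read values back out of the XML rather than eyeballing it. A small sed-based sketch (the heredoc stands in for the real conf/hbase-site.xml; the sed script assumes each `<value>` sits on the line after its `<name>`, as formatted above):

```shell
# Stand-in for conf/hbase-site.xml.
SITE=$(mktemp)
cat > "$SITE" <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop207.cevent.com:8020/HBase</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
</configuration>
EOF

# Print the <value> on the line following a matching <name>.
get_prop() {
  sed -n "/<name>$1<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}" "$SITE"
}

get_prop hbase.rootdir
get_prop hbase.master.port
```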
      11.5 regionservers

      
      
      
      [cevent@hadoop207 conf]$ vim regionservers 
      
      hadoop207.cevent.com
      
      hadoop208.cevent.com
      
      hadoop209.cevent.com
      
      

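bin/hbase-daemons.sh reads this file and ssh-es into each listed host. The fan-out can be pictured as a simple read loop — a sketch where echo stands in for the real ssh + hbase-daemon.sh call:

```shell
# Stand-in for conf/regionservers.
RS=$(mktemp)
printf '%s\n' hadoop207.cevent.com hadoop208.cevent.com hadoop209.cevent.com > "$RS"

# One daemon start per listed host (echo in place of ssh).
started=0
while read -r host; do
  echo "ssh $host 'hbase-daemon.sh start regionserver'"
  started=$((started + 1))
done < "$RS"
echo "hosts handled: $started"
```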
      11.6 Add symlinks to core-site.xml / hdfs-site.xml

      
      
      
       
        
        [cevent@hadoop207 conf]$ ll
        总用量 40
        -rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
        -rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
        -rw-r--r--. 1 cevent cevent 7497 6月  19 23:17 hbase-env.sh
        -rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
        -rw-r--r--. 1 cevent cevent 1586 6月  19 23:23 hbase-site.xml
        -rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
        -rw-r--r--. 1 cevent cevent   63 6月  19 23:26 regionservers
        [cevent@hadoop207 conf]$ ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml 
        [cevent@hadoop207 conf]$ ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml 
        [cevent@hadoop207 conf]$ ll
        总用量 40
        lrwxrwxrwx. 1 cevent cevent   49 6月  19 23:29 core-site.xml -> /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml
        -rw-r--r--. 1 cevent cevent 1811 9月  21 2016 hadoop-metrics2-hbase.properties
        -rw-r--r--. 1 cevent cevent 4537 11月  7 2016 hbase-env.cmd
        -rw-r--r--. 1 cevent cevent 7497 6月  19 23:17 hbase-env.sh
        -rw-r--r--. 1 cevent cevent 2257 9月  21 2016 hbase-policy.xml
        -rw-r--r--. 1 cevent cevent 1586 6月  19 23:23 hbase-site.xml
        lrwxrwxrwx. 1 cevent cevent   49 6月  19 23:29 hdfs-site.xml -> /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
        -rw-r--r--. 1 cevent cevent 4722 4月   5 2017 log4j.properties
        -rw-r--r--. 1 cevent cevent   63 6月  19 23:26 regionservers
        
       
      
      
      

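A quick way to confirm the links resolve where you expect is readlink. A self-contained sketch (temp dirs stand in for /opt/module/... so it runs anywhere):

```shell
# Temp stand-ins for the Hadoop conf dir and the HBase conf dir.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/hadoop" "$DEMO/hbase-conf"
touch "$DEMO/hadoop/core-site.xml" "$DEMO/hadoop/hdfs-site.xml"

# Same shape as the two ln -s commands above.
ln -s "$DEMO/hadoop/core-site.xml" "$DEMO/hbase-conf/core-site.xml"
ln -s "$DEMO/hadoop/hdfs-site.xml" "$DEMO/hbase-conf/hdfs-site.xml"

# readlink prints the target; -e would additionally verify it exists.
readlink "$DEMO/hbase-conf/core-site.xml"
readlink "$DEMO/hbase-conf/hdfs-site.xml"
```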
      11.7 Distribute HBase to the other cluster nodes

      
      
      
       
        
        [cevent@hadoop207 module]$ xsync hbase-1.3.1/
         
        [cevent@hadoop208 module]$ ll
        总用量 24
        drwxr-xr-x. 12 cevent cevent 4096 6月  16 21:35 hadoop-2.7.2
        drwxrwxr-x.  7 cevent cevent 4096 6月  19 23:31 hbase-1.3.1
        drwxr-xr-x.  8 cevent cevent 4096 3月  24 09:14 jdk1.7.0_79
        drwxr-xr-x.  7 cevent cevent 4096 6月  18 09:50 kafka_2.11-0.11.0.0
        -rw-rw-r--.  1 cevent cevent   23 6月  16 21:11 xsync.txt
        drwxr-xr-x. 11 cevent cevent 4096 6月  17 13:36 zookeeper-3.4.10
         
        [cevent@hadoop209 ~]$ cd /opt/module/
        [cevent@hadoop209 module]$ ll
        总用量 24
        drwxr-xr-x. 12 cevent cevent 4096 6月  16 21:37 hadoop-2.7.2
        drwxrwxr-x.  7 cevent cevent 4096 6月  19 23:33 hbase-1.3.1
        drwxr-xr-x.  8 cevent cevent 4096 3月  24 09:14 jdk1.7.0_79
        drwxr-xr-x.  7 cevent cevent 4096 6月  18 09:51 kafka_2.11-0.11.0.0
        -rw-rw-r--.  1 cevent cevent   23 6月  16 21:11 xsync.txt
        drwxr-xr-x. 11 cevent cevent 4096 6月  17 13:36 zookeeper-3.4.10
        
       
      
      
      

      11.8 Start the HBase services

      
      
      
       
        
        [cevent@hadoop207 hbase-1.3.1]$ bin/hbase-daemon.sh start master  # start the master
        starting master, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-master-hadoop207.cevent.com.out
         
        [cevent@hadoop207 hbase-1.3.1]$ bin/hbase-daemons.sh start regionserver  # start the regionservers
        hadoop207.cevent.com: regionserver running as process 4993. Stop it first.
        hadoop208.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop208.cevent.com.out
        hadoop209.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop209.cevent.com.out
        [cevent@hadoop207 hbase-1.3.1]$ jps
        5471 HMaster
        4197 NodeManager
        4542 QuorumPeerMain
        3580 NameNode
        5544 Jps
        3878 SecondaryNameNode
        3696 DataNode
        4079 ResourceManager
        4993 HRegionServer
        
       
      
      
      

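A jps pass like the one above can be turned into a quick post-start health check. A sketch with the jps output inlined as sample data (on a real node, substitute `$(jps)` for the heredoc):

```shell
# Sample jps output, taken from the transcript above.
ACTUAL=$(cat <<'EOF'
5471 HMaster
4197 NodeManager
4542 QuorumPeerMain
3580 NameNode
3878 SecondaryNameNode
3696 DataNode
4079 ResourceManager
4993 HRegionServer
EOF
)

# Flag any HBase-related daemon that is not running.
missing=0
for proc in HMaster HRegionServer QuorumPeerMain; do
  echo "$ACTUAL" | grep -q "$proc" || { echo "missing: $proc"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all HBase daemons up"
```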
      11.9 [Fix: the HMaster process exits a few seconds after starting]

      
      
      
       
        
        (1) Check the NameNode port configured in Hadoop (fs.defaultFS):
         
          [cevent@hadoop207 hadoop-2.7.2]$ cd etc/hadoop/
           
          [cevent@hadoop207 hadoop]$ vim core-site.xml 
          <?xml version="1.0" encoding="UTF-8"?>
          <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
          <!--
            Licensed under the Apache License, Version 2.0 (the "License");
            you may not use this file except in compliance with the License.
            You may obtain a copy of the License at
           
              http://www.apache.org/licenses/LICENSE-2.0
           
            Unless required by applicable law or agreed to in writing, software
            distributed under the License is distributed on an "AS IS" BASIS,
            WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
            See the License for the specific language governing permissions and
            limitations under the License. See accompanying LICENSE file.
          -->
           
          <!-- Put site-specific property overrides in this file. -->
           
          <configuration>
                  <!-- NameNode address in HDFS; hadoop207.cevent.com is this host's hostname -->
                  <property>
                          <name>fs.defaultFS</name>
                          <value>hdfs://hadoop207.cevent.com:8020</value>
                  </property>
           
                  <!-- Where tmp data is stored -->
                  <property>
                          <name>hadoop.tmp.dir</name>
                          <value>/opt/module/hadoop-2.7.2/data/tmp</value>
                  </property>
           
          </configuration>
           
          
         
        
         
        (2) Fix hbase-site.xml — the property name must be hbase.rootdir (not hbase.root.dir), and its hdfs:// URI must use the same host:port as fs.defaultFS:
         
          [cevent@hadoop207 hbase-1.3.1]$ vim conf/hbase-site.xml 
          <?xml version="1.0"?>
          <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
          <!--
          /**
           *
           * Licensed to the Apache Software Foundation (ASF) under one
           * or more contributor license agreements.  See the NOTICE file
           * distributed with this work for additional information
           * regarding copyright ownership.  The ASF licenses this file
           * to you under the Apache License, Version 2.0 (the
           * "License"); you may not use this file except in compliance
           * with the License.  You may obtain a copy of the License at
           *
           *     http://www.apache.org/licenses/LICENSE-2.0
           *
           * Unless required by applicable law or agreed to in writing, software
           * distributed under the License is distributed on an "AS IS" BASIS,
           * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
           * See the License for the specific language governing permissions and
           * limitations under the License.
           */
          -->
          <configuration>
                  <property>
                          <name>hbase.rootdir</name>
                          <value>hdfs://hadoop207.cevent.com:8020/HBase</value>
                  </property>
           
                  <property>
                          <name>hbase.cluster.distributed</name>
                          <value>true</value>
                  </property>
           
                  <!-- New in 0.98; earlier versions had no .port property and the default port was 60000 -->
                  <property>
                          <name>hbase.master.port</name>
                          <value>16000</value>
                  </property>
           
                  <property>
                          <name>hbase.zookeeper.quorum</name>
                          <value>hadoop207.cevent.com,hadoop208.cevent.com,hadoop209.cevent.com</value>
                  </property>
           
                  <property>
                          <name>hbase.zookeeper.property.dataDir</name>
                          <value>/opt/module/zookeeper-3.4.10/data/zkData</value>
                  </property>
          </configuration>
          
         
        
         
        [cevent@hadoop207 hbase-1.3.1]$ jps
        10056 Jps
        9430 HMaster
        4197 NodeManager
        8151 ZooKeeperMain
        4542 QuorumPeerMain
        3580 NameNode
        3878 SecondaryNameNode
        3696 DataNode
        4079 ResourceManager
         
        
       
      
      
      

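The essence of this fix is that the authority (host:port) inside hbase.rootdir must equal fs.defaultFS. That invariant can be checked mechanically — a sketch with the two values inlined from the files above:

```shell
FS_DEFAULT="hdfs://hadoop207.cevent.com:8020"           # from core-site.xml
HBASE_ROOTDIR="hdfs://hadoop207.cevent.com:8020/HBase"  # from hbase-site.xml

# Drop the trailing path component to recover scheme://host:port.
ROOT_AUTH=${HBASE_ROOTDIR%/*}
if [ "$ROOT_AUTH" = "$FS_DEFAULT" ]; then
  echo "MATCH: hbase.rootdir points at the NameNode URI"
else
  echo "MISMATCH: fix hbase.rootdir"
fi
```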
      12. After a successful start, the HBase web UI can be reached at "host:port", for example:

      URL: http://hadoop207.cevent.com:16010/master-status

      (screenshot: HBase master-status page)

      13. Hive-HBase integration package


      13.1 Add dependencies via symlinks

      
      
      
       
        
        ln -s $HBASE_HOME/lib/hbase-common-1.3.1.jar $HIVE_HOME/lib/hbase-common-1.3.1.jar
        ln -s $HBASE_HOME/lib/hbase-server-1.3.1.jar $HIVE_HOME/lib/hbase-server-1.3.1.jar
        ln -s $HBASE_HOME/lib/hbase-client-1.3.1.jar $HIVE_HOME/lib/hbase-client-1.3.1.jar
        ln -s $HBASE_HOME/lib/hbase-protocol-1.3.1.jar $HIVE_HOME/lib/hbase-protocol-1.3.1.jar
        ln -s $HBASE_HOME/lib/hbase-it-1.3.1.jar $HIVE_HOME/lib/hbase-it-1.3.1.jar
        ln -s $HBASE_HOME/lib/htrace-core-3.1.0-incubating.jar $HIVE_HOME/lib/htrace-core-3.1.0-incubating.jar
        ln -s $HBASE_HOME/lib/hbase-hadoop2-compat-1.3.1.jar $HIVE_HOME/lib/hbase-hadoop2-compat-1.3.1.jar
        ln -s $HBASE_HOME/lib/hbase-hadoop-compat-1.3.1.jar $HIVE_HOME/lib/hbase-hadoop-compat-1.3.1.jar
        
       
      
      
      
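      The eight ln -s commands above can also be written as one small loop. A minimal sketch (the function name link_hbase_jars is illustrative; jar names and versions are the ones listed above, and HBASE_HOME/HIVE_HOME are assumed to be set):

```shell
# Sketch: create all of Hive's HBase jar symlinks in one loop.
# Usage: link_hbase_jars <hbase-lib-dir> <hive-lib-dir>
link_hbase_jars() {
  local hbase_lib="$1" hive_lib="$2" jar
  for jar in hbase-common-1.3.1.jar hbase-server-1.3.1.jar \
             hbase-client-1.3.1.jar hbase-protocol-1.3.1.jar \
             hbase-it-1.3.1.jar htrace-core-3.1.0-incubating.jar \
             hbase-hadoop2-compat-1.3.1.jar hbase-hadoop-compat-1.3.1.jar; do
    # -sfn replaces any stale link left over from a previous attempt
    ln -sfn "$hbase_lib/$jar" "$hive_lib/$jar"
  done
}
# e.g. link_hbase_jars "$HBASE_HOME/lib" "$HIVE_HOME/lib"
```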

      13.2 Add the ZooKeeper properties to hive-site.xml

      <property>
        <name>hive.zookeeper.quorum</name>
        <value>hadoop207.cevent.com,hadoop208.cevent.com,hadoop209.cevent.com</value>
        <description>The list of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
      </property>
      <property>
        <name>hive.zookeeper.client.port</name>
        <value>2181</value>
        <description>The port of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
      </property>
      
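      A quick sanity check that both properties actually made it into the file; this is just a plain grep, nothing Hive-specific (the function name check_hive_zk_conf is illustrative):

```shell
# Sketch: verify hive-site.xml contains the two ZooKeeper properties added above.
check_hive_zk_conf() {
  local conf="$1"
  grep -q '<name>hive.zookeeper.quorum</name>' "$conf" &&
  grep -q '<name>hive.zookeeper.client.port</name>' "$conf"
}
# e.g. check_hive_zk_conf /opt/module/hive-1.2.1/conf/hive-site.xml && echo "zk config present"
```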

      13.3 Full hive-site.xml

      [cevent@hadoop207 conf]$ vim /opt/module/hive-1.2.1/conf/hive-site.xml

      <?xml version="1.0"?>
      <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

      <configuration>

        <!-- MySQL metastore URL -->
        <property>
          <name>javax.jdo.option.ConnectionURL</name>
          <value>jdbc:mysql://hadoop207.cevent.com:3306/metastore?createDatabaseIfNotExist=true</value>
          <description>JDBC connect string for a JDBC metastore</description>
        </property>

        <property>
          <name>javax.jdo.option.ConnectionDriverName</name>
          <value>com.mysql.jdbc.Driver</value>
          <description>Driver class name for a JDBC metastore</description>
        </property>

        <!-- MySQL username and password -->
        <property>
          <name>javax.jdo.option.ConnectionUserName</name>
          <value>root</value>
          <description>username to use against metastore database</description>
        </property>

        <property>
          <name>javax.jdo.option.ConnectionPassword</name>
          <value>cevent</value>
          <description>password to use against metastore database</description>
        </property>

        <!-- Show extra info in the Hive CLI: column headers and current database -->
        <property>
          <name>hive.cli.print.header</name>
          <value>true</value>
        </property>

        <property>
          <name>hive.cli.print.current.db</name>
          <value>true</value>
        </property>

        <property>
          <name>hive.zookeeper.quorum</name>
          <value>hadoop207.cevent.com,hadoop208.cevent.com,hadoop209.cevent.com</value>
          <description>The list of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
        </property>

        <property>
          <name>hive.zookeeper.client.port</name>
          <value>2181</value>
          <description>The port of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
        </property>

      </configuration>
      

      13.4 Place the prepared hive-hbase-handler-1.2.1.jar into lib

      [cevent@hadoop207 conf]$ cd /opt/module/hive-1.2.1/lib
      [cevent@hadoop207 lib]$ ll hive-hbase-handler-1.2.1.jar
      -rw-rw-r--. 1 cevent cevent 115935 Jun 19 2015 hive-hbase-handler-1.2.1.jar
      [cevent@hadoop207 lib]$ rm hive-hbase-handler-1.2.1.jar            # remove the bundled handler
      [cevent@hadoop207 lib]$ mv /opt/soft/hive-hbase-handler-1.2.1.jar /opt/module/hive-1.2.1/lib/    # move in the recompiled handler
      

      13.4 Error seen when the handler jar in lib is not replaced

      1 row selected (0.366 seconds)
      0: jdbc:hive2://hadoop210.cevent.com:10000> CREATE TABLE hive_hbase_emp_table(
      0: jdbc:hive2://hadoop210.cevent.com:10000> empno int,
      0: jdbc:hive2://hadoop210.cevent.com:10000> ename string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> job string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> mgr int,
      0: jdbc:hive2://hadoop210.cevent.com:10000> hiredate string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> sal double,
      0: jdbc:hive2://hadoop210.cevent.com:10000> comm double,
      0: jdbc:hive2://hadoop210.cevent.com:10000> deptno int)
      0: jdbc:hive2://hadoop210.cevent.com:10000> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      0: jdbc:hive2://hadoop210.cevent.com:10000> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:ename,info:job,info:mgr,info:hiredate,info:sal,info:comm,info:deptno")
      0: jdbc:hive2://hadoop210.cevent.com:10000> TBLPROPERTIES ("hbase.table.name" = "hbase_emp_table");
      Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V (state=08S01,code=1)
      

      13.5 Delete the stale splitting WAL directory from HDFS

      [cevent@hadoop207 hbase-1.3.1]$ hdfs dfs -rmr /HBase/WALs/hadoop207.cevent.com,16020,1592703009107-splitting

      rmr: DEPRECATED: Please use 'rm -r' instead.

      As the warning says, hdfs dfs -rm -r is the preferred form.
      

      13.6 Create the table from Hive (beeline)

      CREATE TABLE hive_hbase_emp_table(
      empno int,
      ename string,
      job string,
      mgr int,
      hiredate string,
      sal double,
      comm double,
      deptno int)
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:ename,info:job,info:mgr,info:hiredate,info:sal,info:comm,info:deptno")
      TBLPROPERTIES ("hbase.table.name" = "hbase_emp_table");
      

      13.7 Start HBase, hiveserver2 and beeline

      [cevent@hadoop210 hive-1.2.1]$ start-dfs.sh
      [cevent@hadoop210 hive-1.2.1]$ start-yarn.sh
      [cevent@hadoop210 hive-1.2.1]$ zkServer.sh start

      [cevent@hadoop210 hbase-1.3.1]$ bin/start-hbase.sh
      starting master, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-master-hadoop210.cevent.com.out
      hadoop210.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop210.cevent.com.out
      hadoop211.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop211.cevent.com.out
      hadoop212.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop212.cevent.com.out
      [cevent@hadoop210 hbase-1.3.1]$ jps
      10394 Jps
      9192 DataNode
      9999 QuorumPeerMain
      10320 HRegionServer
      10174 HMaster
      9540 ResourceManager
      9051 NameNode
      9660 NodeManager
      9373 SecondaryNameNode

      [cevent@hadoop210 hive-1.2.1]$ bin/hiveserver2
      OK
      [cevent@hadoop210 hive-1.2.1]$ bin/beeline
      Beeline version 1.2.1 by Apache Hive
      beeline> !connect jdbc:hive2://hadoop210.cevent.com:10000
      Connecting to jdbc:hive2://hadoop210.cevent.com:10000
      Enter username for jdbc:hive2://hadoop210.cevent.com:10000: cevent
      Enter password for jdbc:hive2://hadoop210.cevent.com:10000: ******
      Connected to: Apache Hive (version 1.2.1)
      Driver: Hive JDBC (version 1.2.1)
      

      14. Integration error: Error: java.lang.UnsupportedClassVersionError: org/apache/hadoop/hive/hbase/HBaseStorageHandler : Unsupported major.minor version 52.0 (state=,code=0)

      0: jdbc:hive2://hadoop210.cevent.com:10000> CREATE TABLE hive_hbase_emp_table(
      0: jdbc:hive2://hadoop210.cevent.com:10000> empno int,
      0: jdbc:hive2://hadoop210.cevent.com:10000> ename string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> job string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> mgr int,
      0: jdbc:hive2://hadoop210.cevent.com:10000> hiredate string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> sal double,
      0: jdbc:hive2://hadoop210.cevent.com:10000> comm double,
      0: jdbc:hive2://hadoop210.cevent.com:10000> deptno int)
      0: jdbc:hive2://hadoop210.cevent.com:10000> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      0: jdbc:hive2://hadoop210.cevent.com:10000> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:ename,info:job,info:mgr,info:hiredate,info:sal,info:comm,info:deptno")
      0: jdbc:hive2://hadoop210.cevent.com:10000> TBLPROPERTIES ("hbase.table.name" = "hbase_emp_table");
      Error: java.lang.UnsupportedClassVersionError: org/apache/hadoop/hive/hbase/HBaseStorageHandler : Unsupported major.minor version 52.0 (state=,code=0)
      
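      "Unsupported major.minor version 52.0" means the handler's class files target Java 8 (class-file major version 52), while the JVM running Hive is older (Java 7 reads up to major 51). The target version of any .class file can be read from bytes 7-8 of its header; a small sketch (the jar/class path below is illustrative):

```shell
# Sketch: print a class file's major version (52 = Java 8, 51 = Java 7, 50 = Java 6).
class_major_version() {
  # bytes at offsets 6-7 of a .class file hold the big-endian major version
  od -An -j 6 -N 2 -t u1 "$1" | awk '{ print $1 * 256 + $2 }'
}
# e.g., after extracting the handler class from the jar:
#   unzip -o hive-hbase-handler-1.2.1.jar 'org/apache/hadoop/hive/hbase/HBaseStorageHandler.class'
#   class_major_version org/apache/hadoop/hive/hbase/HBaseStorageHandler.class
```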

      15. Fixing the bug:

      1) Download the Hive source from the official archive:

      http://archive.apache.org/dist/hive/hive-1.2.1/

      2) Rebuild hive-hbase-handler from the apache-hive source, so its class files match the JDK the cluster runs.
      16. Restart the cluster

      
      
      
       
        
        [cevent@hadoop210 hadoop-2.7.2]$ sbin/start-dfs.sh
        [cevent@hadoop210 hadoop-2.7.2]$ sbin/start-yarn.sh
        [cevent@hadoop210 hadoop-2.7.2]$ zkServer.sh start
        [cevent@hadoop210 hadoop-2.7.2]$ jps
        4370 ResourceManager
        4488 NodeManager
        4841 QuorumPeerMain
        4965 Jps
        3448 NameNode
        3745 SecondaryNameNode
        3565 DataNode

        [cevent@hadoop210 hbase-1.3.1]$ bin/start-hbase.sh
        starting master, logging to /opt/module/hbase-1.3.1/logs/hbase-cevent-master-hadoop210.cevent.com.out
        hadoop212.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop212.cevent.com.out
        hadoop211.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop211.cevent.com.out
        hadoop210.cevent.com: starting regionserver, logging to /opt/module/hbase-1.3.1/bin/../logs/hbase-cevent-regionserver-hadoop210.cevent.com.out
        [cevent@hadoop210 hbase-1.3.1]$ jps
        4370 ResourceManager
        5115 HMaster
        4488 NodeManager
        4841 QuorumPeerMain
        3448 NameNode
        3745 SecondaryNameNode
        3565 DataNode
        5393 Jps
        5262 HRegionServer

      17. Start hiveserver2 and insert data via beeline

      [cevent@hadoop210 hive-1.2.1]$ ll
      total 532
      drwxrwxr-x. 3 cevent cevent   4096 Jun 28 16:23 bin
      drwxrwxr-x. 2 cevent cevent   4096 Jun 28 21:08 conf
      -rw-rw-r--. 1 cevent cevent  21061 Jun 28 16:35 derby.log
      drwxrwxr-x. 4 cevent cevent   4096 Jun 28 16:23 examples
      drwxrwxr-x. 7 cevent cevent   4096 Jun 28 16:23 hcatalog
      drwxrwxr-x. 4 cevent cevent   4096 Jun 29 09:24 lib
      -rw-rw-r--. 1 cevent cevent  24754 Apr 30 2015 LICENSE
      drwxrwxr-x. 5 cevent cevent   4096 Jun 28 16:35 metastore_db
      -rw-rw-r--. 1 cevent cevent    397 Jun 19 2015 NOTICE
      -rw-rw-r--. 1 cevent cevent   4366 Jun 19 2015 README.txt
      -rw-rw-r--. 1 cevent cevent 421129 Jun 19 2015 RELEASE_NOTES.txt
      drwxrwxr-x. 3 cevent cevent   4096 Jun 28 16:23 scripts
      -rw-rw-r--. 1 cevent cevent  24896 Jun 28 22:32 zookeeper.out
      [cevent@hadoop210 hive-1.2.1]$ bin/hiveserver2

      [cevent@hadoop210 hive-1.2.1]$ bin/beeline
      Beeline version 1.2.1 by Apache Hive

      beeline> !connect jdbc:hive2://hadoop210.cevent.com:10000
      Connecting to jdbc:hive2://hadoop210.cevent.com:10000
      Enter username for jdbc:hive2://hadoop210.cevent.com:10000: cevent
      Enter password for jdbc:hive2://hadoop210.cevent.com:10000: ******
      Connected to: Apache Hive (version 1.2.1)
      Driver: Hive JDBC (version 1.2.1)
      Transaction isolation: TRANSACTION_REPEATABLE_READ
      0: jdbc:hive2://hadoop210.cevent.com:10000> show tables;
      +-----------+--+
      | tab_name  |
      +-----------+--+
      | student   |
      +-----------+--+
      1 row selected (1.458 seconds)

      Create the Hive table mapped to the HBase table:
      0: jdbc:hive2://hadoop210.cevent.com:10000> CREATE TABLE hive_hbase_emp_table(
      0: jdbc:hive2://hadoop210.cevent.com:10000> empno int,
      0: jdbc:hive2://hadoop210.cevent.com:10000> ename string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> job string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> mgr int,
      0: jdbc:hive2://hadoop210.cevent.com:10000> hiredate string,
      0: jdbc:hive2://hadoop210.cevent.com:10000> sal double,
      0: jdbc:hive2://hadoop210.cevent.com:10000> comm double,
      0: jdbc:hive2://hadoop210.cevent.com:10000> deptno int)
      0: jdbc:hive2://hadoop210.cevent.com:10000> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      0: jdbc:hive2://hadoop210.cevent.com:10000> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:ename,info:job,info:mgr,info:hiredate,info:sal,info:comm,info:deptno")
      0: jdbc:hive2://hadoop210.cevent.com:10000> TBLPROPERTIES ("hbase.table.name" = "hbase_emp_table");
      No rows affected (5.286 seconds)
      0: jdbc:hive2://hadoop210.cevent.com:10000> show tables;
      +-----------------------+--+
      |       tab_name        |
      +-----------------------+--+
      | hive_hbase_emp_table  |
      | student               |
      +-----------------------+--+
      2 rows selected (0.057 seconds)

      Insert data:
      0: jdbc:hive2://hadoop210.cevent.com:10000> insert into table hive_hbase_emp_table values(2020,'cevent','developer',201,'2020-06-29',20000,2000,619);
      INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
      INFO  : number of splits:1
      INFO  : Submitting tokens for job: job_1593393499225_0001
      INFO  : The url to track the job: http://hadoop210.cevent.com:8088/proxy/application_1593393499225_0001/
      INFO  : Starting Job = job_1593393499225_0001, Tracking URL = http://hadoop210.cevent.com:8088/proxy/application_1593393499225_0001/
      INFO  : Kill Command = /opt/module/hadoop-2.7.2/bin/hadoop job  -kill job_1593393499225_0001
      INFO  : Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
      INFO  : 2020-06-29 09:33:36,570 Stage-0 map = 0%,  reduce = 0%
      INFO  : 2020-06-29 09:33:44,934 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 2.79 sec
      INFO  : MapReduce Total cumulative CPU time: 2 seconds 790 msec
      INFO  : Ended Job = job_1593393499225_0001
      No rows affected (37.702 seconds)

      Query again after the data was cleared in HBase (step 18) - no rows come back:
      0: jdbc:hive2://hadoop210.cevent.com:10000> select * from hive_hbase_emp_table;
      +-----------------------------+-----------------------------+---------------------------+---------------------------+--------------------------------+---------------------------+----------------------------+------------------------------+--+
      | hive_hbase_emp_table.empno  | hive_hbase_emp_table.ename  | hive_hbase_emp_table.job  | hive_hbase_emp_table.mgr  | hive_hbase_emp_table.hiredate  | hive_hbase_emp_table.sal  | hive_hbase_emp_table.comm  | hive_hbase_emp_table.deptno  |
      +-----------------------------+-----------------------------+---------------------------+---------------------------+--------------------------------+---------------------------+----------------------------+------------------------------+--+
      +-----------------------------+-----------------------------+---------------------------+---------------------------+--------------------------------+---------------------------+----------------------------+------------------------------+--+
      No rows selected (1.009 seconds)
      

      18. View and truncate the data in HBase

      [cevent@hadoop210 hbase-1.3.1]$ hbase shell
      SLF4J: Class path contains multiple SLF4J bindings.
      SLF4J: Found binding in [jar:file:/opt/module/hbase-1.3.1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
      SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
      HBase Shell; enter 'help<RETURN>' for list of supported commands.
      Type "exit<RETURN>" to leave the HBase Shell
      Version 1.3.1, r930b9a55528fe45d8edce7af42fef2d35e77677a, Thu Apr  6 19:36:54 PDT 2017

      hbase(main):001:0> list
      TABLE
      hbase_emp_table
      student
      2 row(s) in 0.2990 seconds

      => ["hbase_emp_table", "student"]
      hbase(main):002:0> scan 'hbase_emp_table'
      ROW                     COLUMN+CELL
      0 row(s) in 0.2450 seconds

      hbase(main):004:0> scan 'hbase_emp_table'
      ROW                     COLUMN+CELL
       2020                   column=info:comm, timestamp=1593394424573, value=2000.0
       2020                   column=info:deptno, timestamp=1593394424573, value=619
       2020                   column=info:ename, timestamp=1593394424573, value=cevent
       2020                   column=info:hiredate, timestamp=1593394424573, value=2020-06-29
       2020                   column=info:job, timestamp=1593394424573, value=developer
       2020                   column=info:mgr, timestamp=1593394424573, value=201
       2020                   column=info:sal, timestamp=1593394424573, value=20000.0
      1 row(s) in 0.0540 seconds

      Clear the data:
      hbase(main):005:0> truncate 'hbase_emp_table'
      Truncating 'hbase_emp_table' table (it may take a while):
       - Disabling table...
       - Truncating table...
      0 row(s) in 7.7160 seconds