I. HBase Installation
1. Download the latest HBase release, version 1.2.3, from the official site. To skip the build step I downloaded hbase-1.2.3-bin.tar.gz directly; just extract it and it is ready to use. (If this link is slow, switch to another download link on the official site.)
[hadoop@master tar]$ tar -xzf hbase-1.2.3-bin.tar.gz
[hadoop@master tar]$ mv hbase-1.2.3 /usr/local/hadoop/hbase
[hadoop@master tar]$ cd /usr/local/hadoop/hbase/
[hadoop@master hbase]$ ./bin/hbase version
HBase 1.2.3
Source code repository git://kalashnikov.att.net/Users/stack/checkouts/hbase.git.commit revision=bd63744624a26dc3350137b564fe746df7a721a4
Compiled by stack on Mon Aug 29 15:13:42 PDT 2016
From source with checksum 0ca49367ef6c3a680888bbc4f1485d18
If the command above produces normal output, the installation succeeded; next, configure the environment variables.
2. Configure environment variables
Edit ~/.bashrc and append the following to the PATH line:
:$HADOOP_HOME/hbase/bin
The ~/.bashrc file then contains:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HADOOP_HOME/hbase/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
[hadoop@master hadoop]$ source ~/.bashrc
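As a quick sanity check (a hedged extra step, not in the original post), confirm the shell now resolves the hbase command via the new PATH entry:
[hadoop@master hadoop]$ which hbase
/usr/local/hadoop/hbase/bin/hbase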
II. HBase Standalone Mode
1. Edit the configuration file hbase/conf/hbase-env.sh
Change
# export JAVA_HOME=/usr/java/jdk1.6.0/
to
export JAVA_HOME=/usr/local/java/
Change (uncomment)
# export HBASE_MANAGES_ZK=true
to
export HBASE_MANAGES_ZK=true
Add the following line so that HBase's ssh calls use port 322:
export HBASE_SSH_OPTS="-p 322"
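To double-check that the three settings are active (a sketch that is not part of the original post; the grep pattern simply matches uncommented export lines), you can run:
[hadoop@master hbase]$ grep -E '^export (JAVA_HOME|HBASE_MANAGES_ZK|HBASE_SSH_OPTS)' conf/hbase-env.sh
which should print the three export lines shown above.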
2. Edit the configuration file hbase/conf/hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:/usr/local/hadoop/tmp/hbase/hbase-tmp</value>
  </property>
</configuration>
3. Start HBase
[hadoop@master hbase]$ start-hbase.sh
starting master, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
jps now shows an additional HMaster process:
[hadoop@master hbase]$ jps
12178 ResourceManager
11540 NameNode
4277 Jps
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
4. Use the HBase shell
[hadoop@master hbase]$ hbase shell
2016-11-07 10:11:02,187 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 2.0000 average load
hbase(main):002:0> exit
Launching the HBase shell without starting HBase first will result in errors.
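If you are not sure whether HBase is running, a quick check (a sketch using the JDK's jps tool, not part of the original steps) is to look for the HMaster process before launching the shell:
[hadoop@master hbase]$ jps | grep HMaster
3933 HMaster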
5. Stop HBase
[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................
III. HBase Pseudo-Distributed Mode
Pseudo-distributed mode differs from standalone mode mainly in the configuration files.
1. Edit the configuration file hbase/conf/hbase-env.sh
Change
# export JAVA_HOME=/usr/java/jdk1.6.0/
to
export JAVA_HOME=/usr/local/java/
Change (uncomment)
# export HBASE_MANAGES_ZK=true
to
export HBASE_MANAGES_ZK=true
Change
# export HBASE_CLASSPATH=
to
export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop/
Add the following line so that HBase's ssh calls use port 322:
export HBASE_SSH_OPTS="-p 322"
The ZooKeeper instance bundled with HBase is sufficient here; a separate ZooKeeper only becomes necessary in fully distributed mode.
2. Edit the configuration file hbase/conf/hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://10.1.2.108:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
Note that pointing hbase.rootdir at an HDFS path like this assumes the Hadoop platform itself runs in pseudo-distributed mode with a single NameNode.
3. Start HBase
[hadoop@master hbase]$ start-hbase.sh
localhost: starting zookeeper, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-master.out
master running as process 3933. Stop it first.
starting regionserver, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-1-regionserver-master.out
jps now additionally shows HMaster and HRegionServer:
[hadoop@master hbase]$ jps
7312 Jps
12178 ResourceManager
11540 NameNode
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
7151 HRegionServer
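Since hbase.rootdir now points at HDFS, HBase creates its root directory there on first start. As a hedged check (assuming the hdfs command from this Hadoop installation is on PATH), you can list it and should see HBase's internal directories such as data and WALs:
[hadoop@master hbase]$ hdfs dfs -ls /hbase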
4. Use the HBase shell
[hadoop@master hbase]$ hbase shell
2016-11-07 10:35:05,262 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
1) Check cluster status and version information
hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 1.0000 average load
hbase(main):002:0> version
1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
2) Create a user table with three column families
hbase(main):003:0> create 'user','user_id','address','info'
0 row(s) in 2.3570 seconds
=> Hbase::Table - user
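As a small usage sketch that is not in the original transcript (the row key row1 and the column info:name are made-up examples), you can write and read a cell in the new table:
put 'user', 'row1', 'info:name', 'Tom'
get 'user', 'row1'
scan 'user'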
3) List all tables
hbase(main):005:0> create 'tmp', 't1', 't2'
0 row(s) in 1.2320 seconds
=> Hbase::Table - tmp
hbase(main):006:0> list
TABLE
tmp
user
2 row(s) in 0.0100 seconds
=> ["tmp", "user"]
hbase(main):007:0>
4) View the table structure
hbase(main):008:0> describe 'user'
Table user is ENABLED
user
COLUMN FAMILIES DESCRIPTION
{NAME => 'address', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'info', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'user_id', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
3 row(s) in 0.2060 seconds
hbase(main):009:0>
5) Delete a table
hbase(main):010:0> disable 'tmp'
0 row(s) in 2.2580 seconds
hbase(main):011:0> drop 'tmp'
0 row(s) in 1.2560 seconds
hbase(main):012:0>
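As an optional follow-up (not in the original transcript), the exists and list commands confirm the table is really gone:
exists 'tmp'
list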
5. Stop HBase
[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................
localhost: no zookeeper to stop because no pidfile /tmp/hbase-hadoop-zookeeper.pid
When shutting down the whole stack, the order is: stop HBase first, then YARN, then HDFS.
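Put together, a sketch of the full shutdown sequence (stop-yarn.sh and stop-dfs.sh are the standard scripts under $HADOOP_HOME/sbin, assuming that is how the cluster was started):
[hadoop@master hbase]$ stop-hbase.sh
[hadoop@master hbase]$ stop-yarn.sh
[hadoop@master hbase]$ stop-dfs.sh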
6. Web UI
You can also access http://10.1.2.108:60010/master.jsp directly.
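To check that the page responds without a browser (a hedged sketch: curl must be installed, and the port has to match your hbase.master.info.port setting; HBase 1.x defaults to 16010, while 60010 was the pre-1.0 default):
[hadoop@master hbase]$ curl -I http://10.1.2.108:60010/master.jsp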