
HBase configuration (hbase-site.xml) and basic usage


This article is mainly based on: http://www.cnblogs.com/ggjucheng/archive/2012/05/04/2483474.html

First, before installing and configuring HBase, make sure ZooKeeper is running correctly on every machine (for ZooKeeper configuration, see this article: http://blog.csdn.net/wild46cat/article/details/53205548).
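Before going any further, it helps to confirm that each ZooKeeper node is actually serving requests. A minimal check, assuming zkServer.sh is on the PATH, netcat (nc) is installed, and ZooKeeper listens on its default client port 2181:

# Reports whether this node is running as a leader or a follower
zkServer.sh status

# Probe a node directly with ZooKeeper's four-letter "ruok" command; a healthy node answers "imok"
echo ruok | nc host1 2181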

Of course, you are still encouraged to read the official documentation: http://hbase.apache.org/book.html#quickstart


Now let's get to it:

1. Extract the tar.gz archive into a directory.
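A minimal sketch, assuming the downloaded archive is hbase-1.2.4-bin.tar.gz (substitute your actual version) and that HBase should live under the current user's home directory:

# Unpack the release and give it a short, version-free directory name
tar -zxvf hbase-1.2.4-bin.tar.gz -C ~/
mv ~/hbase-1.2.4 ~/hbase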

2. Edit the configuration file conf/hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://host1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hdfs://host1:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>host1,host2,host3</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this should be set to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which we will start/stop ZooKeeper on.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg. The directory where the snapshot is stored.</description>
  </property>
</configuration>
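Note that the hdfs://host1:9000 portion of hbase.rootdir must match the NameNode address configured as fs.defaultFS in Hadoop's core-site.xml. A quick cross-check, assuming the Hadoop client tools are on the PATH:

# Should print the same scheme/host/port used in hbase.rootdir, e.g. hdfs://host1:9000
hdfs getconf -confKey fs.defaultFS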

3. Edit the configuration file conf/hbase-env.sh and add JAVA_HOME:

export JAVA_HOME=/usr/local/jdk1.8.0_111
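Since this setup runs its own ZooKeeper ensemble (configured earlier) rather than letting HBase manage one, it is also common to tell HBase not to start/stop ZooKeeper itself. A minimal addition to hbase-env.sh:

# We run an external ZooKeeper ensemble on host1/host2/host3, so HBase should not start its own
export HBASE_MANAGES_ZK=false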

4. Edit the regionservers file (one region server hostname per line):

host2
host3

5. Copy the hbase directory to the other nodes.

scp -r hbase/ user@host1:~/
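The same copy needs to reach every host listed in regionservers. A sketch, assuming the same user account and home-directory layout on each node:

scp -r hbase/ user@host2:~/
scp -r hbase/ user@host3:~/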

6. Start Hadoop first:

start-dfs.sh

start-yarn.sh
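Before continuing, it is worth checking that the HDFS and YARN daemons actually came up. A quick check with the JDK's jps tool, run on each node:

# On the master you should normally see NameNode, SecondaryNameNode and ResourceManager;
# on the worker nodes, DataNode and NodeManager
jps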


7. Start HBase:

start-hbase.sh
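If startup succeeds, jps should show an HMaster process on the master node and an HRegionServer process on each host from regionservers, and the hbase.rootdir directory should have been created on HDFS. A quick check, assuming the Hadoop client is available:

jps                   # look for HMaster / HRegionServer
hdfs dfs -ls /hbase   # contents of hbase.rootdir created by the master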


==================== The following is taken from the reference article ====================

8. Testing
1). Log in to the HBase shell client

./bin/hbase shell
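Once inside the shell, a quick sanity check is the status command; it should report an active master and the region servers configured in step 4 as live servers:

hbase(main):001:0> status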

 

2). Create a table and insert 3 records


hbase(main):003:0> create 'test', 'cf' 
0 row(s) in 1.2200 seconds 
hbase(main):003:0> list 'test' 
test 
1 row(s) in 0.0550 seconds 
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1' 
0 row(s) in 0.0560 seconds 
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2' 
0 row(s) in 0.0370 seconds 
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3' 
0 row(s) in 0.0450 seconds

 

3). View the inserted data

hbase(main):007:0> scan 'test' 
ROW COLUMN+CELL 
row1 column=cf:a, timestamp=1288380727188, value=value1 
row2 column=cf:b, timestamp=1288380738440, value=value2 
row3 column=cf:c, timestamp=1288380747365, value=value3 
3 row(s) in 0.0590 seconds

 

4). Read a single row

hbase(main):008:0> get 'test', 'row1' 
COLUMN CELL 
cf:a timestamp=1288380727188, value=value1 
1 row(s) in 0.0400 seconds

 

5). Disable and drop the table

hbase(main):012:0> disable 'test' 
0 row(s) in 1.0930 seconds 
hbase(main):013:0> drop 'test' 
0 row(s) in 0.0770 seconds

 

6). Exit the shell

hbase(main):014:0> exit
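The same sequence can also be run non-interactively, which is handy for smoke-testing a fresh deployment. A sketch that feeds the commands to the shell through a here-document (assuming HBase's bin directory is on the PATH):

hbase shell <<'EOF'
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'
EOF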

Below is my own running HBase instance, viewed through the master's web UI:

http://192.168.1.221:16010/


Note: when configuring HBase, you also need to modify Hadoop's hdfs-site.xml configuration file.

The property to add is:

<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>

If it is not increased, problems can occur; the official documentation explains that an HDFS DataNode has an upper bound on the number of files it will serve at one time, and exceeding it can lead to odd-looking failures such as errors about missing blocks when loading data.
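Because this property belongs to HDFS, it has to be present on every DataNode and HDFS restarted before it takes effect. A sketch, assuming Hadoop lives under ~/hadoop on each host (adjust the path to your installation):

# Push the updated hdfs-site.xml to the other nodes, then restart HDFS
scp ~/hadoop/etc/hadoop/hdfs-site.xml user@host2:~/hadoop/etc/hadoop/
scp ~/hadoop/etc/hadoop/hdfs-site.xml user@host3:~/hadoop/etc/hadoop/
stop-dfs.sh
start-dfs.sh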

