Basic environment preparation is not covered again here: JDK installation, disabling the firewall, network configuration, environment variables, and passwordless SSH between all nodes. HBase 2.0.5 is used.
Refer to the official documentation. For a fully distributed deployment, finish the configuration on a single node first, then distribute it to the other nodes.
In conf/hbase-env.sh, set the JDK path and disable the bundled ZooKeeper:
export JAVA_HOME=/usr/java/default
export HBASE_MANAGES_ZK=false
Then configure conf/hbase-site.xml: set the HBase root directory on HDFS (muycluster is the HDFS nameservice), enable distributed mode, and point HBase at the external ZooKeeper quorum:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://muycluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node02,node03,node04</value>
  </property>
</configuration>
List the RegionServer nodes of the cluster in conf/regionservers:
node02
node03
node04
To run a standby HMaster, create conf/backup-masters and list the backup node:
vi backup-masters
node03
HBase also needs the HDFS client configuration. The official documentation offers three ways to provide it:
Add a pointer to your HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable in hbase-env.sh.
Add a copy of hdfs-site.xml (or hadoop-site.xml) or, better, symlinks, under ${HBASE_HOME}/conf, or
if only a small set of HDFS client configurations, add them to hbase-site.xml.
We usually choose the second option: copy (or symlink) hdfs-site.xml into ${HBASE_HOME}/conf.
Start the cluster from node01:
[root@node01 /]# start-hbase.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hbase-2.0.5/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /opt/bigdata/hbase-2.0.5/logs/hbase-root-master-node01.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hbase-2.0.5/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
node02: running regionserver, logging to /opt/bigdata/hbase-2.0.5/bin/../logs/hbase-root-regionserver-node02.out
node04: running regionserver, logging to /opt/bigdata/hbase-2.0.5/bin/../logs/hbase-root-regionserver-node04.out
node03: running regionserver, logging to /opt/bigdata/hbase-2.0.5/bin/../logs/hbase-root-regionserver-node03.out
node04: running master, logging to /opt/bigdata/hbase-2.0.5/bin/../logs/hbase-root-master-node04.out
The HMaster web UI listens on port 16010: http://node01:16010/master-status
To use the Java API, add the following Maven dependencies:
<!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>2.0.5</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-mapreduce</artifactId>
<version>2.0.5</version>
</dependency>
The Java API mainly provides an Admin object for table operations. As the official documentation states: "HBase schemas can be created or updated using the Apache HBase Shell or by using Admin in the Java API."
Configuration conf = null;
Connection conn = null;
//table administration object
Admin admin = null;
Table table = null;
//the table to operate on
TableName tableName = TableName.valueOf("user");

@Before
public void init() throws IOException {
    //create the configuration object
    conf = HBaseConfiguration.create();
    //point the client at the ZooKeeper quorum
    conf.set("hbase.zookeeper.quorum", "node02,node03,node04");
    //obtain a connection
    conn = ConnectionFactory.createConnection(conf);
    //administration interface for DDL
    admin = conn.getAdmin();
    //data access object for the table
    table = conn.getTable(tableName);
}
/**
 * Create a table. Table DDL is done through the Admin object.
 * @throws IOException
 */
@Test
public void createTable() throws IOException {
    //table descriptor builder
    TableDescriptorBuilder tableDescriptorBuilder = TableDescriptorBuilder.newBuilder(tableName);
    //column family descriptor builder
    ColumnFamilyDescriptorBuilder columnFamilyDescriptorBuilder = ColumnFamilyDescriptorBuilder.newBuilder("cf".getBytes());
    //attach the column family to the table
    tableDescriptorBuilder.setColumnFamily(columnFamilyDescriptorBuilder.build());
    if (admin.tableExists(tableName)) {
        //a table must be disabled before it can be deleted
        admin.disableTable(tableName);
        admin.deleteTable(tableName);
    }
    //create the table
    admin.createTable(tableDescriptorBuilder.build());
}
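The builders also expose per-column-family tuning. As an illustration, a minimal sketch that creates a table keeping up to three versions of each cell; the table name user_versioned and the version count 3 are arbitrary choices for this sketch, not part of the original example, and it assumes the corresponding org.apache.hadoop.hbase.client imports:
@Test
public void createVersionedTable() throws IOException {
    //hypothetical table name, for illustration only
    TableName versioned = TableName.valueOf("user_versioned");
    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("cf"))
            .setMaxVersions(3)   //keep up to 3 versions per cell
            .build();
    TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(versioned)
            .setColumnFamily(cf)
            .build();
    if (!admin.tableExists(versioned)) {
        admin.createTable(desc);
    }
}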
/**
 * Insert a single row. Each addColumn call adds one cell
 * (column family, qualifier, value); values are stored as raw bytes.
 */
@Test
public void insert() throws IOException {
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("elite"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("age"), Bytes.toBytes("22"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("address"), Bytes.toBytes("gz"));
    table.put(put);
}
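When loading many rows, one Put per call means one RPC per row; Table.put(List<Put>) sends them as a batch. A minimal sketch under the same test setup (the row keys and values here are made up, and it assumes java.util.List/ArrayList imports):
@Test
public void insertBatch() throws IOException {
    List<Put> puts = new ArrayList<>();
    for (int i = 2; i <= 4; i++) {
        Put put = new Put(Bytes.toBytes("row" + i));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("user" + i));
        puts.add(put);
    }
    //one batched call instead of one RPC per row
    table.put(puts);
}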
@Test
public void get() throws IOException {
    Get get = new Get(Bytes.toBytes("row1"));
    //filter on the server side: only the requested columns are returned
    get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"));
    get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("age"));
    get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("address"));
    Result result = table.get(get);
    Cell cell1 = result.getColumnLatestCell(Bytes.toBytes("cf"), Bytes.toBytes("name"));
    Cell cell2 = result.getColumnLatestCell(Bytes.toBytes("cf"), Bytes.toBytes("age"));
    Cell cell3 = result.getColumnLatestCell(Bytes.toBytes("cf"), Bytes.toBytes("address"));
    System.out.print(Bytes.toString(CellUtil.cloneValue(cell1)) + " ");
    System.out.print(Bytes.toString(CellUtil.cloneValue(cell2)) + " ");
    System.out.print(Bytes.toString(CellUtil.cloneValue(cell3)));
}
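When the qualifiers are not known in advance, Result.listCells() returns every cell of the row. A hedged sketch of the same read done generically (assumes a java.util.List import):
@Test
public void getAllCells() throws IOException {
    Get get = new Get(Bytes.toBytes("row1"));
    Result result = table.get(get);
    //listCells() returns null when the row does not exist
    List<Cell> cells = result.listCells();
    if (cells != null) {
        for (Cell cell : cells) {
            System.out.println(Bytes.toString(CellUtil.cloneQualifier(cell))
                    + " = " + Bytes.toString(CellUtil.cloneValue(cell)));
        }
    }
}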
/**
 * Scan all rows in the table.
 */
@Test
public void scan() throws IOException {
    Scan scan = new Scan();
    ResultScanner rss = table.getScanner(scan);
    for (Result rs : rss) {
        Cell cell1 = rs.getColumnLatestCell(Bytes.toBytes("cf"), Bytes.toBytes("name"));
        Cell cell2 = rs.getColumnLatestCell(Bytes.toBytes("cf"), Bytes.toBytes("age"));
        Cell cell3 = rs.getColumnLatestCell(Bytes.toBytes("cf"), Bytes.toBytes("address"));
        System.out.print(Bytes.toString(CellUtil.cloneValue(cell1)) + " ");
        System.out.print(Bytes.toString(CellUtil.cloneValue(cell2)) + " ");
        System.out.println(Bytes.toString(CellUtil.cloneValue(cell3)));
    }
}
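A full-table scan touches every region; when the row-key range is known, it is cheaper to bound the scan. A minimal sketch using the HBase 2.x range setters (the bounds row1/row3 are arbitrary choices for illustration):
@Test
public void scanRange() throws IOException {
    Scan scan = new Scan()
            .withStartRow(Bytes.toBytes("row1"))   //inclusive
            .withStopRow(Bytes.toBytes("row3"));   //exclusive by default
    //ResultScanner is Closeable; close it to free server-side resources
    try (ResultScanner rss = table.getScanner(scan)) {
        for (Result rs : rss) {
            System.out.println(Bytes.toString(rs.getRow()));
        }
    }
}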
/**
 * Delete an entire row.
 * @throws IOException
 */
@Test
public void delete() throws IOException {
    Delete delete = new Delete(Bytes.toBytes("row2"));
    table.delete(delete);
}
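A Delete can also target a single column rather than the whole row. A hedged sketch (cf:age is picked purely for illustration): addColumn removes only the latest version of the cell, while addColumns (plural) removes all versions:
@Test
public void deleteColumn() throws IOException {
    Delete delete = new Delete(Bytes.toBytes("row1"));
    //addColumn deletes only the latest version of cf:age;
    //addColumns (plural) would delete every version
    delete.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("age"));
    table.delete(delete);
}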
@After
public void close() {
    try {
        table.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    try {
        admin.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    try {
        conn.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}