
Pseudo-Distributed Installation of Hadoop 2.8.0 + HBase 1.3.1 + Hive 1.2.1 + Kylin 2.0

Test environment: CentOS 6.5 + JDK 1.8.0_131

1. Install Hadoop 2.8.0

1) Download hadoop-2.8.0

2) Extract to /opt/app/hadoop-2.8.0

3) Pseudo-distributed configuration files are as follows (be sure to use localhost in pseudo-distributed mode):

vi /opt/app/hadoop-2.8.0/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/data</value>
    <description>namenode</description>
  </property>
</configuration>
vi /opt/app/hadoop-2.8.0/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/hadoopdata/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/hadoopdata/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>localhost:50090</value>
  </property>
</configuration>
vi /opt/app/hadoop-2.8.0/etc/hadoop/yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/home/hadoop/data/mapred/staging</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/home/hadoop/data/mapred/staging</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
</configuration>
vi /opt/app/hadoop-2.8.0/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xms2000m -Xmx4600m</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>5120</value>
  </property>
  <property>
    <name>mapreduce.reduce.input.buffer.percent</name>
    <value>0.5</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/home/hadoop/data/mapred/staging</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
  </property>
</configuration>

vi  /opt/app/hadoop-2.8.0/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/opt/app/jdk1.8.0_131
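One step the list above does not spell out: before the very first start, the HDFS NameNode has to be formatted. A minimal sketch, assuming the paths configured above (the smoketest directory name is just an example):

```shell
# Format the NameNode metadata directory configured in hdfs-site.xml.
# Run this ONCE before the first start; re-running it wipes HDFS metadata.
/opt/app/hadoop-2.8.0/bin/hdfs namenode -format

# After HDFS is started (section 8), a trivial round trip confirms the setup:
/opt/app/hadoop-2.8.0/bin/hadoop fs -mkdir -p /tmp/smoketest
/opt/app/hadoop-2.8.0/bin/hadoop fs -ls /tmp
```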

2. Install HBase 1.3.1

1) Download hbase-1.3.1

2) Extract to /opt/app/hbase-1.3.1

3) Pseudo-distributed configuration is as follows (using the ZooKeeper bundled with HBase):

vi /opt/app/hbase-1.3.1/conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>

vi /opt/app/hbase-1.3.1/conf/regionservers

localhost
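Once the services are up (section 8), a quick round trip through the HBase shell verifies the pseudo-distributed setup. A sketch, assuming hbase is on the PATH; the table name is a throwaway example:

```shell
# Show master/regionserver status.
echo "status" | hbase shell

# Create a throwaway table, write one cell, read it back, then drop it.
hbase shell <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:c1', 'value1'
get 'smoke_test', 'row1'
disable 'smoke_test'
drop 'smoke_test'
EOF
```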
3. Install MySQL

Installation steps omitted; see the following article:

http://blog.csdn.net/cuker919/article/details/46481427

Points to note:

1) Uninstall the bundled MySQL first

2) Install MySQL under /usr/local/mysql; otherwise there is a lot of unnecessary trouble to deal with

3) Create a hive user so that Hive can use the MySQL database
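Step 3) can be done along these lines. A sketch only: the database name hive and password hive match the hive-site.xml later in this guide, but the charset and grant scope are assumptions, so adjust to your own policy:

```shell
# Create the metastore database and a dedicated 'hive' user.
# Run as the MySQL root user; grants are limited to the hive database.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET latin1;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
SQL
```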

4. Install Hive 1.2.1

1) Download hive-1.2.1

2) Extract to /opt/app/apache-hive-1.2.1-bin/

3) Main configuration is as follows (remember to copy the MySQL JDBC driver jar into Hive's lib directory):

 vi /opt/app/apache-hive-1.2.1-bin/conf/

export JAVA_HOME=/opt/app/jdk1.8.0_131
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin/
export HBASE_HOME=/opt/app/hbase-1.3.1
export HIVE_AUX_JARS_PATH=/opt/app/apache-hive-1.2.1-bin/lib
export HIVE_CLASSPATH=/opt/app/apache-hive-1.2.1-bin/conf

Excerpt: vi /opt/app/apache-hive-1.2.1-bin/conf/hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hive/iotmp</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hive/iotmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
4) Hive's lib directory needs to be uploaded to HDFS:

 hadoop fs -put /opt/app/apache-hive-1.2.1-bin/lib/* /opt/app/apache-hive-1.2.1-bin/lib/

5. Install Kylin 2.0

1) Download apache-kylin-2.0.0-bin

2) Extract to /opt/app/apache-kylin-2.0.0-bin/

3) Main configuration changes are as follows:

vi /opt/app/apache-kylin-2.0.0-bin/bin/find-hive-dependency.sh

hive_conf_path=$HIVE_HOME/conf
hive_exec_path=$HIVE_HOME/lib/hive-exec-1.2.1.jar
vi /opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh

Modify HBASE_CLASSPATH_PREFIX to add hive_dependency:

export HBASE_CLASSPATH_PREFIX=${KYLIN_HOME}/conf:${KYLIN_HOME}/lib/*:${KYLIN_HOME}/ext/*:${hive_dependency}:${HBASE_CLASSPATH_PREFIX}
6. Configure /etc/profile

## set java
export JAVA_HOME=/opt/app/jdk1.8.0_131
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:/opt/app/hadoop-2.8.0/etc/hadoop/mapred-site.xml
JRE_HOME=$JAVA_HOME/jre
export HADOOP_HOME=/opt/app/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/lib
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin
export HCAT_HOME=$HIVE_HOME/hcatalog
export HIVE_CONF=$HIVE_HOME/conf
PATH=$PATH:$HIVE_HOME/bin
export HBASE_HOME=/opt/app/hbase-1.3.1
PATH=$PATH:$HBASE_HOME/bin
export KYLIN_HOME=/opt/app/apache-kylin-2.0.0-bin
PATH=$PATH:$KYLIN_HOME/bin
Run source /etc/profile to make the changes take effect.

7. Configure the hostname mapping in /etc/hosts

First check the hostname:

[root@CentOS65x64 mysql]# hostname
CentOS65x64.localdomain

Map the hostname (CentOS65x64.localdomain) to 127.0.0.1; otherwise, in pseudo-distributed mode, ZooKeeper may fail to start.
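The mapping itself is one line in /etc/hosts. The sketch below works on a scratch copy so it is safe to run anywhere; apply the same line to the real /etc/hosts as root:

```shell
# Demonstrate the required mapping on a scratch copy of /etc/hosts.
# In production, edit /etc/hosts itself as root.
HOSTS_FILE=$(mktemp)
echo "127.0.0.1   localhost" > "$HOSTS_FILE"

HOSTNAME_FQDN="CentOS65x64.localdomain"   # value reported by `hostname`

# Append the mapping only if it is not already present (idempotent).
if ! grep -q "$HOSTNAME_FQDN" "$HOSTS_FILE"; then
    echo "127.0.0.1   $HOSTNAME_FQDN" >> "$HOSTS_FILE"
fi

grep "$HOSTNAME_FQDN" "$HOSTS_FILE"
rm -f "$HOSTS_FILE"
```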


8. Configuration complete; start the services

service mysql start
/opt/app/hadoop-2.8.0/sbin/start-all.sh
/opt/app/hadoop-2.8.0/sbin/mr-jobhistory-daemon.sh start historyserver
/opt/app/hbase-1.3.1/bin/start-hbase.sh
nohup hive --service metastore > /home/hive/metastore.log 2>&1 &
/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh start

Do not forget to start the Hive metastore service (the nohup hive --service metastore line above); otherwise Kylin will report a "hive-meta-1.2.1.jar not found" error when building cubes.
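After everything has started, jps gives a quick health check. The daemon names in the comment are what this stack typically shows (the exact list depends on which services came up; the Hive metastore and Kylin appear as generic Java processes):

```shell
# List running JVM daemons; in this pseudo-distributed stack one expects
# roughly: NameNode, DataNode, SecondaryNameNode, ResourceManager,
# NodeManager, JobHistoryServer, HMaster, HRegionServer, HQuorumPeer,
# plus entries for the Hive metastore and Kylin.
jps
```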


9. Shutdown

/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh stop
/opt/app/hadoop-2.8.0/sbin/stop-all.sh
/opt/app/hbase-1.3.1/bin/stop-hbase.sh

Check any remaining processes with jps and kill them with kill -9 <pid>.
