Test environment: CentOS 6.5 + JDK 1.8.0_131
1. Install Hadoop 2.8.0
1) Download hadoop-2.8.0
2) Extract it to /opt/app/hadoop-2.8.0
3) Pseudo-distributed configuration files are as follows (in pseudo-distributed mode, be sure to use localhost):
vi /opt/app/hadoop-2.8.0/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/data</value>
    <description>namenode</description>
  </property>
</configuration>
vi /opt/app/hadoop-2.8.0/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/hadoopdata/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/hadoopdata/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>localhost:50090</value>
  </property>
</configuration>

vi /opt/app/hadoop-2.8.0/etc/hadoop/yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/home/hadoop/data/mapred/staging</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/home/hadoop/data/mapred/staging</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
</configuration>

vi /opt/app/hadoop-2.8.0/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <!-- The reduce JVM heap must fit within mapreduce.reduce.memory.mb
         (2048 MB below), or YARN will kill the container -->
    <value>-Xmx1638m</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>5120</value>
  </property>
  <property>
    <name>mapreduce.reduce.input.buffer.percent</name>
    <value>0.5</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <!-- Legacy MRv1 property; ignored when running on YARN -->
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/home/hadoop/data/mapred/staging</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
  </property>
</configuration>

4) Set JAVA_HOME in hadoop-env.sh:
vi /opt/app/hadoop-2.8.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/app/jdk1.8.0_131
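Before the very first start-up (step 8), HDFS must be formatted once; a minimal sketch, run as the user that owns the data directories:
# Initializes dfs.namenode.name.dir (/usr/hadoopdata/name above).
# Run only once: re-formatting wipes existing HDFS metadata.
/opt/app/hadoop-2.8.0/bin/hdfs namenode -format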
2. Install HBase 1.3.1
1) Download hbase-1.3.1
2) Extract it to /opt/app/hbase-1.3.1
3) Pseudo-distributed configuration is as follows (using HBase's bundled ZooKeeper):
vi /opt/app/hbase-1.3.1/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
vi /opt/app/hbase-1.3.1/conf/regionservers
localhost
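After HBase is started in step 8, a quick sanity check from the shell:
# Asks the cluster for its status; in pseudo-distributed mode this should
# report 1 active master and 1 region server on localhost.
echo "status" | /opt/app/hbase-1.3.1/bin/hbase shell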
3. Install MySQL
http://blog.csdn.net/cuker919/article/details/46481427
Notes:
1) First uninstall the MySQL that ships with the system
2) It is best to install MySQL under /usr/local/mysql; otherwise a lot of unnecessary extra work is required
3) Create a hive user so that Hive can use the MySQL database (see the sketch below)
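A minimal sketch for step 3), assuming the metastore database is also named hive and the password is hive (matching the hive-site.xml values in section 4):
# Run as the MySQL root user: create the metastore database and
# a hive account allowed to access it from this machine.
mysql -u root -p -e "CREATE DATABASE hive;
  CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
  GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
  FLUSH PRIVILEGES;"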
4. Install Hive 1.2.1
1) Download hive-1.2.1
2) Extract it to /opt/app/apache-hive-1.2.1-bin/
3) The main configuration is as follows (note: copy the MySQL JDBC driver jar into Hive's lib directory first):
vi /opt/app/apache-hive-1.2.1-bin/conf/hive-env.sh
export JAVA_HOME=/opt/app/jdk1.8.0_131
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin/
export HBASE_HOME=/opt/app/hbase-1.3.1
export HIVE_AUX_JARS_PATH=/opt/app/apache-hive-1.2.1-bin/lib
export HIVE_CLASSPATH=/opt/app/apache-hive-1.2.1-bin/conf
Excerpt: vi /opt/app/apache-hive-1.2.1-bin/conf/hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hive/iotmp</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hive/iotmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
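Before starting the metastore for the first time, the schema has to exist in MySQL; a minimal sketch using the schematool shipped with Hive:
# Creates the metastore tables in the MySQL database configured above.
/opt/app/apache-hive-1.2.1-bin/bin/schematool -dbType mysql -initSchema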

4) The Hive lib directory needs to be uploaded to HDFS (create the target directory first; see the sketch below):
hadoop fs -put /opt/app/apache-hive-1.2.1-bin/lib/* /opt/app/apache-hive-1.2.1-bin/lib/
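The put above assumes its target directory already exists on HDFS; it can be created beforehand, together with Hive's scratch and warehouse directories (paths taken from hive-site.xml above):
# Create the HDFS target for the lib upload plus Hive's working directories.
hadoop fs -mkdir -p /opt/app/apache-hive-1.2.1-bin/lib /tmp/hive /user/hive/warehouse
hadoop fs -chmod g+w /tmp/hive /user/hive/warehouse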
5. Install Kylin 2.0
1) Download apache-kylin-2.0.0-bin
2) Extract it to /opt/app/apache-kylin-2.0.0-bin/
3) The main configuration changes are as follows:
vi /opt/app/apache-kylin-2.0.0-bin/bin/find-hive-dependency.sh
hive_conf_path=$HIVE_HOME/conf
hive_exec_path=$HIVE_HOME/lib/hive-exec-1.2.1.jar
vi /opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh
Modify HBASE_CLASSPATH_PREFIX to add hive_dependency:
export HBASE_CLASSPATH_PREFIX=${KYLIN_HOME}/conf:${KYLIN_HOME}/lib/*:${KYLIN_HOME}/ext/*:${hive_dependency}:${HBASE_CLASSPATH_PREFIX}
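With HADOOP_HOME, HIVE_HOME, HBASE_HOME and KYLIN_HOME all exported (step 6), Kylin's bundled environment checker can verify the setup before the first start:
# Checks that Hadoop, Hive and HBase can be located and that HDFS is writable.
/opt/app/apache-kylin-2.0.0-bin/bin/check-env.sh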
6. Configure /etc/profile
vi /etc/profile, then run source /etc/profile to make the changes take effect:
## set java
export JAVA_HOME=/opt/app/jdk1.8.0_131
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:/opt/app/hadoop-2.8.0/etc/hadoop/mapred-site.xml
JRE_HOME=$JAVA_HOME/jre
export HADOOP_HOME=/opt/app/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/lib
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin
export HCAT_HOME=$HIVE_HOME/hcatalog
export HIVE_CONF=$HIVE_HOME/conf
PATH=$PATH:$HIVE_HOME/bin
export HBASE_HOME=/opt/app/hbase-1.3.1
PATH=$PATH:$HBASE_HOME/bin
export KYLIN_HOME=/opt/app/apache-kylin-2.0.0-bin
PATH=$PATH:$KYLIN_HOME/bin
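A quick way to confirm the environment variables took effect:
# Each command should print the version installed above.
java -version
hadoop version
hive --version
hbase version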
7. Configure /etc/hosts
First check the hostname:
[root@CentOS65x64 mysql]# hostname
CentOS65x64.localdomain
Map the hostname (CentOS65x64.localdomain) to 127.0.0.1; otherwise ZooKeeper may fail to start in pseudo-distributed mode.
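For example, the /etc/hosts entry would look like:
127.0.0.1   localhost CentOS65x64.localdomain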
8. Configuration complete; start the services:
service mysql start
/opt/app/hadoop-2.8.0/sbin/start-all.sh
/opt/app/hadoop-2.8.0/sbin/mr-jobhistory-daemon.sh start historyserver
/opt/app/hbase-1.3.1/bin/start-hbase.sh
nohup hive --service metastore > /home/hive/metastore.log 2>&1 &
/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh start
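To confirm everything came up, list the running Java daemons:
jps
# Expected in pseudo-distributed mode (exact names may vary slightly): NameNode,
# DataNode, SecondaryNameNode, ResourceManager, NodeManager, JobHistoryServer,
# HMaster, HRegionServer, HQuorumPeer (HBase's bundled ZooKeeper), RunJar
# (the Hive metastore) and Kylin's embedded Tomcat.
The Kylin web UI should then be reachable at http://localhost:7070/kylin (default login ADMIN/KYLIN).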
9. Shutdown (stop HBase before HDFS so region data is flushed cleanly):
/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh stop
/opt/app/hbase-1.3.1/bin/stop-hbase.sh
/opt/app/hadoop-2.8.0/sbin/stop-all.sh