Please cite the source when reposting: http://blog.csdn.net/l1028386804/article/details/45771619
Modify core-site.xml as follows:

<configuration>

  <!-- global properties -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/liuyazhuang/tmp</value>
  </property>

  <!-- file system properties -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
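Since hadoop.tmp.dir points at /home/liuyazhuang/tmp, it helps to create that directory before formatting so the NameNode and DataNode can write to it. A minimal sketch, assuming the liuyazhuang user from the config above; adjust the path for your own account:

  # Create the Hadoop temp directory referenced by hadoop.tmp.dir
  # (path taken from core-site.xml above)
  mkdir -p /home/liuyazhuang/tmp
  chmod 755 /home/liuyazhuang/tmp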
Modify hdfs-site.xml as follows. (dfs.replication defaults to 3; if you do not change it and have fewer than three DataNodes, errors will be reported.)
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
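Once the cluster is up and files have been written, you can confirm that blocks really carry the configured replication factor. A minimal sketch using the fsck tool that ships with Hadoop:

  # Report file and block status (including replication) for all of HDFS
  bin/hadoop fsck / -files -blocks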
Modify mapred-site.xml as follows:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
b. Format the Hadoop file system. On the command line, run: bin/hadoop namenode -format
If you need to format more than once, first delete the /home/liuyazhuang/tmp folder (the same directory configured in core-site.xml) and then run the format operation again, as sketched below.
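A minimal sketch of that clean re-format sequence, assuming the tmp path from core-site.xml above and that commands are run from the Hadoop installation directory:

  # Remove the old Hadoop temp directory so the NameNode can be
  # re-formatted without stale state (path from core-site.xml)
  rm -rf /home/liuyazhuang/tmp
  # Re-format the HDFS namespace
  bin/hadoop namenode -format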
c. Start Hadoop. On the command line, run: bin/start-all.sh
d. Verify that Hadoop was installed successfully: open the following URLs in a browser; if they load normally, the installation succeeded.
http://localhost:50030 (the MapReduce web UI)
http://localhost:50070 (the HDFS web UI)
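You can also probe the two web UIs from the command line instead of a browser. A minimal sketch, assuming curl is installed; each command should print 200 once the daemons are up:

  # Check that the JobTracker and NameNode web UIs respond
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50030
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070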
(1) First create two input files, file01 and file02, on the local disk:
$ echo "Hello World Bye World" > file01
$ echo "Hello Hadoop Goodbye Hadoop" > file02
(2) Create an input directory in HDFS:
$ hadoop fs -mkdir input
(3) Copy file01 and file02 into HDFS:
$ hadoop fs -copyFromLocal /home/liuyazhuang/file0* input
(4) Run wordcount:
$ hadoop jar hadoop-0.20.2-examples.jar wordcount input output
(5) When the job finishes, view the result (a consolidated script follows this list):
$ hadoop fs -cat output/part-r-00000
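The five steps above can also be run end to end as a single script. A minimal sketch, assuming the hadoop binary is on PATH and the examples jar sits in the current directory as in step (4):

  #!/bin/sh
  # WordCount walkthrough for Hadoop 0.20.2, consolidating steps (1)-(5)
  echo "Hello World Bye World" > file01
  echo "Hello Hadoop Goodbye Hadoop" > file02
  hadoop fs -mkdir input                          # create the HDFS input dir
  hadoop fs -copyFromLocal file01 file02 input    # upload both local files
  hadoop jar hadoop-0.20.2-examples.jar wordcount input output
  hadoop fs -cat output/part-r-00000              # print the word counts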
Add the following environment variables to your shell profile (e.g. ~/.bashrc):
export JAVA_HOME=/home/chuanqing/profile/jdk-6u13-linux-i586.zip_FILES/jdk1.6.0_13
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/bin:$PATH"
export HADOOP_INSTALL=/home/chuanqing/profile/hadoop-0.20.203.0
export PATH=$PATH:$HADOOP_INSTALL/bin
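After adding these exports, it is worth confirming that both toolchains resolve. A minimal sketch, assuming the exports were placed in ~/.bashrc:

  # Reload the profile and confirm java and hadoop are on PATH
  source ~/.bashrc
  java -version       # should report 1.6.0_13
  hadoop version      # should report 0.20.203.0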