
Deploying Hadoop 3.3.2 and Spark 3.3.2 on Windows


Table of Contents

I. Deploying Hadoop 3.3.2 on Windows

1. Extract the Hadoop archive from an administrator CMD

2. Configure system environment variables

3. Download the hadoop winutils files

4. Edit the configuration files under D:\server\hadoop-3.3.2\etc\hadoop

(1) core-site.xml

(2) hdfs-site.xml

(3) mapred-site.xml

(4) yarn-site.xml

(5) workers

(6) hadoop-env.cmd

5. Initialize Hadoop

6. Start Hadoop

7. Check the web UIs in a browser

II. Deploying Spark 3.3.2 on Windows

1. Download the archive

2. Extract and configure environment variables

3. Launch spark-shell

4. Check the web UI in a browser


I. Deploying Hadoop 3.3.2 on Windows

1. Extract the Hadoop archive from an administrator CMD

Do not extract the archive with the WinRAR GUI directly; it will report errors, because the tarball contains symbolic links that Windows can only create with administrator privileges.

Run the following commands:

  :: Syntax: start winrar x -y <archive> <destination>
  :: For example, to extract xx.tar.gz into the current directory:
  cd xxx
  start winrar x -y xx.tar.gz ./

  :: For the Hadoop archive:
  start winrar x -y hadoop-3.3.2.tar.gz ./

2. Configure system environment variables

Create a HADOOP_HOME system variable pointing at the Hadoop directory, then add %HADOOP_HOME%\bin to PATH, as sketched below.
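A minimal sketch from an administrator CMD, assuming Hadoop was extracted to D:\server\hadoop-3.3.2 (adjust to your layout). setx /M writes a machine-level variable; the PATH entry itself is safer to add through the Environment Variables dialog, since setx truncates values longer than 1024 characters:

  setx /M HADOOP_HOME "D:\server\hadoop-3.3.2"
  :: add %HADOOP_HOME%\bin (and %HADOOP_HOME%\sbin) to PATH via
  :: System Properties > Advanced > Environment Variables,
  :: then open a NEW terminal so the changes take effect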

3. Download the hadoop winutils files

Download link: https://github.com/cdarlint/winutils

The bin package under hadoop-3.2.2 works.

After downloading and extracting, copy every file inside the winutils bin directory into hadoop-3.3.2/bin. Note: do not replace the whole bin directory; copy the individual files into it.
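For example, from CMD (a sketch; the winutils download path below is hypothetical, substitute wherever you extracted the repository):

  :: /Y overwrites files of the same name in the target without prompting
  xcopy /Y D:\downloads\winutils-master\hadoop-3.2.2\bin\* D:\server\hadoop-3.3.2\bin\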

4. Edit the configuration files under D:\server\hadoop-3.3.2\etc\hadoop

(1) core-site.xml

  <configuration>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/D:/server/hadoop-3.3.2/data/tmp</value>
    </property>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
    </property>
  </configuration>
(2) hdfs-site.xml

  <configuration>
    <!-- Replication is set to 1 because this is a single-node Hadoop -->
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/D:/server/hadoop-3.3.2/data/namenode</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/D:/server/hadoop-3.3.2/data/datanode</value>
    </property>
  </configuration>
(3) mapred-site.xml

  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    <property>
      <name>mapred.job.tracker</name>
      <value>hdfs://localhost:9001</value>
    </property>
  </configuration>
(4) yarn-site.xml

  <configuration>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
  </configuration>
(5) workers

  localhost
(6) hadoop-env.cmd

  @rem Around line 24:
  @rem The java implementation to use. Required.
  set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_333
  @rem Note: write the Java path with the 8.3 short name (PROGRA~1 for
  @rem "Program Files"), since the scripts cannot handle spaces in paths
  @rem Near the last line:
  set HADOOP_IDENT_STRING=%USERNAME%
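If you are unsure of the short name on your machine, CMD can list it (the /x switch prints the 8.3 short names next to the long names):

  :: look for the short name shown beside "Program Files" (typically PROGRA~1)
  dir /x C:\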

5. Initialize Hadoop

Run CMD as administrator:

  Microsoft Windows [Version 10.0.19045.4046]
  (c) Microsoft Corporation. All rights reserved.

  C:\WINDOWS\system32>D:

  D:\>cd server\hadoop-3.3.2

  D:\server\hadoop-3.3.2>hadoop version
  Hadoop 3.3.2
  Source code repository git@github.com:apache/hadoop.git -r 0bcb014209e219273cb6fd4152df7df713cbac61
  Compiled by chao on 2022-02-21T18:39Z
  Compiled with protoc 3.7.1
  From source with checksum 4b40fff8bb27201ba07b6fa5651217fb
  This command was run using /D:/server/hadoop-3.3.2/share/hadoop/common/hadoop-common-3.3.2.jar

  D:\server\hadoop-3.3.2>hdfs namenode -format

6. Start Hadoop

  D:\server\hadoop-3.3.2>cd sbin

  D:\server\hadoop-3.3.2\sbin>start-all.cmd
  This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
  starting yarn daemons

Four new console windows will open, one per daemon (namenode, datanode, resourcemanager, nodemanager).
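To confirm the daemons from the command line, jps (bundled with the JDK) lists running Java processes (a quick sanity check; process IDs will differ per machine):

  :: output should include NameNode, DataNode, ResourceManager and NodeManager
  jps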

7. Check the web UIs in a browser

localhost:9870 (HDFS NameNode web UI)

localhost:8088 (YARN ResourceManager web UI)

II. Deploying Spark 3.3.2 on Windows

1. Download the archive

Download from the Apache archive: https://archive.apache.org/dist/spark/spark-3.3.2/ (the pre-built package spark-3.3.2-bin-hadoop3.tgz matches this setup).

 

2. Extract and configure environment variables

The extraction command is the same as the Hadoop extraction command above.

Configure the environment variables:

Create SPARK_HOME pointing at the extracted Spark directory, then add %SPARK_HOME%\bin to PATH.
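As with Hadoop, this can be done from an administrator CMD (a sketch; the directory name below assumes the pre-built hadoop3 package and is hypothetical, adjust it to wherever you extracted Spark):

  setx /M SPARK_HOME "D:\server\spark-3.3.2-bin-hadoop3"
  :: then add %SPARK_HOME%\bin to the system PATH and open a new terminal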

3. Launch spark-shell
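With the environment variables in place, running spark-shell from a new CMD window starts a Scala REPL with a SparkSession (spark) and SparkContext (sc) already created. A minimal smoke test, assuming the default setup:

  C:\>spark-shell
  scala> sc.parallelize(1 to 10).reduce(_ + _)   // sums 1..10; should print 55
  scala> spark.range(100).count()                // should print 100
  scala> :quit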

4. Check the web UI in a browser

localhost:4040 (the Spark application web UI, available while spark-shell is running)
