
Installing Hadoop and Spark

Environment preparation:

Virtual machine information (see the /etc/hosts sketch after the list):

192.168.2.201 hadoop001

192.168.2.202 hadoop002

192.168.2.203 hadoop003
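The steps below assume each of these hostnames resolves on every node. A minimal sketch for appending the mappings to /etc/hosts (adjust to your own network; run as root on all three machines):

# append hostname mappings on every node
cat >> /etc/hosts <<'EOF'
192.168.2.201 hadoop001
192.168.2.202 hadoop002
192.168.2.203 hadoop003
EOF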

1. Install the JDK (see the tips below; the commands here were not updated and still reference JDK 17)

Run on all nodes. Package download address:

https://www.oracle.com/java/technologies/downloads/

Installation commands:

mkdir -p /usr/java
tar -zxvf jdk-17_linux-x64_bin.tar.gz -C /usr/java/

Add the following to /etc/profile, then run source /etc/profile to apply it:

export JAVA_HOME=/usr/java/jdk-17
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
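A quick sanity check after sourcing /etc/profile; the version printed should match the JDK you actually installed (JDK 8, per Pitfall 1 below):

# confirm the shell picks up the JDK from JAVA_HOME
java -version
echo $JAVA_HOME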

2. Passwordless SSH between nodes and disabling SELinux

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop001
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop002
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop003
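To confirm passwordless SSH works from the current node, a small loop like the following should print each hostname without asking for a password (hostnames as listed above):

for h in hadoop001 hadoop002 hadoop003; do
    ssh "$h" hostname    # should return the remote hostname with no password prompt
done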

Disable SELinux:

sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
reboot
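The sed command above only takes effect after the reboot. If you want SELinux out of the way immediately without rebooting right now, an optional alternative is:

setenforce 0      # switch to permissive mode for the current session
getenforce        # verify: prints Permissive (or Disabled after the reboot)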

3. Install Hadoop

1. Download Hadoop

Download address:

https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz

Extract:

tar -zxvf hadoop-3.3.1.tar.gz  -C /usr/local/

Configure the environment variables and run source /etc/profile to apply them immediately:

export HADOOP_HOME=/usr/local/hadoop-3.3.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Run all of the above on every node.
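A quick check that the Hadoop binaries are on the PATH on every node (a sketch using the hostnames above):

for h in hadoop001 hadoop002 hadoop003; do
    echo "== $h =="
    ssh "$h" 'source /etc/profile && hadoop version | head -n 1'
done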

2. Configure Hadoop on the master node (hadoop001)

The configuration files are in the etc folder under the extracted path: /usr/local/hadoop-3.3.1/etc/hadoop

hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_301
export HDFS_SECONDARYNAMENODE_USER=root
export HADOOP_SHELL_EXECNAME=root
export HDFS_DATANODE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

core-site.xml

<configuration>
    <property>
        <!-- Communication address of the NameNode's HDFS filesystem -->
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
    <property>
        <!-- Directory where the Hadoop cluster stores temporary files -->
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop001:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <!-- Where the NameNode stores its data (metadata); multiple comma-separated directories may be given for fault tolerance -->
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/namenode/data</value>
    <final>true</final>
  </property>
  <property>
    <!-- Where the DataNode stores its data (blocks) -->
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/datanode/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <!-- Run MapReduce jobs on YARN -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
   <property>
       <name>mapreduce.jobhistory.address</name>
       <value>hadoop001:10020</value>
   </property>
   <property>
       <name>mapreduce.jobhistory.webapp.address</name>
       <value>hadoop001:19888</value>
   </property>
   <property>
       <name>mapreduce.application.classpath</name>
       <value>
           /usr/local/hadoop-3.3.1/etc/hadoop,
           /usr/local/hadoop-3.3.1/share/hadoop/common/*,
           /usr/local/hadoop-3.3.1/share/hadoop/common/lib/*,
           /usr/local/hadoop-3.3.1/share/hadoop/hdfs/*,
           /usr/local/hadoop-3.3.1/share/hadoop/hdfs/lib/*,
           /usr/local/hadoop-3.3.1/share/hadoop/mapreduce/*,
           /usr/local/hadoop-3.3.1/share/hadoop/mapreduce/lib/*,
           /usr/local/hadoop-3.3.1/share/hadoop/yarn/*,
           /usr/local/hadoop-3.3.1/share/hadoop/yarn/lib/*
       </value>
   </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <!-- Auxiliary service run on the NodeManager; must be set to mapreduce_shuffle for MapReduce programs to run on YARN. -->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop001:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop001:8030</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop001:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop001:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop001:8088</value>
    </property>
</configuration>

workers

hadoop001
hadoop002
hadoop003
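The configuration above is edited on hadoop001 only, so it still has to reach the other nodes. One possible way (assuming the same /usr/local/hadoop-3.3.1 path exists everywhere) is to copy the etc/hadoop directory:

for h in hadoop002 hadoop003; do
    scp -r /usr/local/hadoop-3.3.1/etc/hadoop/ "$h":/usr/local/hadoop-3.3.1/etc/
done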
3. Format the NameNode
hdfs namenode -format
4. Start Hadoop

Run from the sbin directory:

./start-all.sh
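After start-all.sh finishes, the running daemons can be checked with jps. Roughly, hadoop001 should show NameNode, SecondaryNameNode, ResourceManager plus a DataNode and NodeManager, while hadoop002/003 show DataNode and NodeManager (the exact set depends on your configuration):

for h in hadoop001 hadoop002 hadoop003; do
    echo "== $h =="
    ssh "$h" 'source /etc/profile && jps'
done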
Tips:
Pitfall 1: the JDK version must be JDK 8 (Oracle JDK 8 is now distributed under a commercial license).

Otherwise YARN fails with the following error:

2021-09-28 09:06:08,285 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.ExceptionInInitializerError
	at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
	at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
	at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
	at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
	at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
	at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
	at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
	at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
	at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
	at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
	at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
	at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
	at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
	at com.google.inject.AbstractModule.install(AbstractModule.java:122)
	at com.google.inject.servlet.ServletModule.configure(ServletModule.java:52)
	at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
	at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
	at com.google.inject.spi.Elements.getElements(Elements.java:110)
	at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
	at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
	at com.google.inject.Guice.createInjector(Guice.java:96)
	at com.google.inject.Guice.createInjector(Guice.java:73)
	at com.google.inject.Guice.createInjector(Guice.java:62)
	at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:417)
	at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:465)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1389)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1498)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1699)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @46d21ee0
Pitfall 2:

If the cluster will not come up after hdfs namenode -format, the files under the DataNode directories need to be cleaned out first; whether this affects already-stored data still needs testing.
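A hedged sketch of that cleanup, using the dfs.namenode.name.dir / dfs.datanode.data.dir / hadoop.tmp.dir paths configured above. Note this wipes HDFS metadata and block data, so only run it on a cluster whose contents you can afford to lose:

# WARNING: destroys existing HDFS data; paths match hdfs-site.xml and core-site.xml above
for h in hadoop001 hadoop002 hadoop003; do
    ssh "$h" 'rm -rf /home/hadoop/namenode/data/* /home/hadoop/datanode/data/* /home/hadoop/tmp/*'
done
hdfs namenode -format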

Pitfall 3:

Since the 3.x releases, the NameNode web UI port is 9870.

The YARN management page is at http://namenodeip:8088/cluster
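A quick way to confirm both web UIs are up from the command line (hadoop001 is the NameNode/ResourceManager host in this setup; 200 means the page is reachable):

curl -s -o /dev/null -w "%{http_code}\n" http://hadoop001:9870/          # NameNode web UI
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop001:8088/cluster   # YARN web UI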

Pitfall 4:

The paths in hdfs-site.xml must carry the file:/ prefix; otherwise uploading files produces errors such as:

hadoop fs -ls /
ls: Call From 127.0.1.1 to 0.0.0.0:9000 failed on connection exception: 
java.net.ConnectException: Connection refused; 
For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

The log shows the following error:
2021-09-28 09:59:31,621 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hadoop/namenode/data is in an inconsistent state: storage directory does not exist or is not accessible.

4. Install Spark

1. Download the matching Spark version

https://spark.apache.org/downloads.html

2. Upload and extract to the desired path
tar -zxvf spark-3.1.2-bin-hadoop3.2.tgz

mv spark-3.1.2-bin-hadoop3.2 /usr/local/spark-3.1.2
3. Edit /etc/profile and apply it with source /etc/profile
export SPARK_HOME=/usr/local/spark-3.1.2
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
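After sourcing /etc/profile, a quick check that Spark is on the PATH (it should report Spark 3.1.2):

source /etc/profile
spark-submit --version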
4. Edit the configuration files

/usr/local/spark-3.1.2/conf

spark-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_301
export HADOOP_CONF_DIR=/usr/local/hadoop-3.3.1/etc/hadoop
export SPARK_MASTER_HOST=hadoop001
export SPARK_LOCAL_DIRS=/usr/local/spark-3.1.2

workers

hadoop001
hadoop002
hadoop003
5. Distribute Spark to the other nodes (see the sketch below)
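One possible way to do the distribution, assuming the same /usr/local path and the hostnames above; /etc/profile on each node also needs the same SPARK_HOME entries:

for h in hadoop002 hadoop003; do
    scp -r /usr/local/spark-3.1.2 "$h":/usr/local/
done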
6. Start Spark

Run from the sbin directory:
./start-all.sh
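Afterwards, jps should typically show a Master (plus a Worker) process on hadoop001 and Worker processes on hadoop002/003, and the standalone master web UI is normally reachable on port 8080 of the master node. A quick check on hadoop001:

jps | grep -E 'Master|Worker'                                      # Spark daemons on this node
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop001:8080/    # Spark master web UI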

Tips:
Pitfall 1:

Remember to delete the default localhost entry from the workers file; otherwise Spark may fail to start.
