
Installing Spark and Hadoop

Experiment Contents and Requirements

1. Install Hadoop and Spark

        Log in to the Linux system and install Hadoop in pseudo-distributed mode. After Hadoop is installed, install Spark in Local mode.

2. Common HDFS Operations

        Log in to the Linux system as user hadoop and start Hadoop. Referring to a Hadoop book, online material, or the "Common HDFS Shell Commands" page in the "Lab Guide" section of the course website, use the Shell commands provided by Hadoop to complete the following operations:

        (1) Start Hadoop and create the user directory "/user/hadoop" in HDFS;

        (2) Under the local Linux directory "/home/hadoop", create a text file test.txt, type some arbitrary content into it, and upload it to the "/user/hadoop" directory in HDFS;

        (3) Download the test.txt file from the HDFS directory "/user/hadoop" to the local directory "/home/hadoop/下载";

        (4) Print the contents of the test.txt file in the HDFS directory "/user/hadoop" to the terminal;

        (5) Create a subdirectory input under "/user/hadoop" in HDFS, and copy the test.txt file from "/user/hadoop" to "/user/hadoop/input";

        (6) Delete the test.txt file from the HDFS directory "/user/hadoop", and delete the input subdirectory under "/user/hadoop" together with all of its contents.

3. Reading Data from File Systems with Spark

        (1) In spark-shell, read the local Linux file "/home/hadoop/test.txt" and count the number of lines in the file;

        (2) In spark-shell, read the HDFS file "/user/hadoop/test.txt" (create it first if it does not exist) and count the number of lines in the file;

        (3) Write a standalone application that reads the HDFS file "/user/hadoop/test.txt" (create it first if it does not exist) and counts the number of lines in the file; compile and package the application into a JAR with sbt, then submit the JAR to Spark with spark-submit to run it.

Experiment Environment

VMware 16.1.2 build-17966106

ubuntu-22.04.4-desktop-amd64.iso

Java 11

scala-2.13.13.tgz

hadoop-3.3.6.tar.gz

spark-3.5.1-bin-hadoop3-scala2.13.tgz

sbt-1.9.9.tgz

Installing the JDK

Install Java

  1. sudo apt update
  2. sudo apt upgrade
  3. sudo apt-get install openjdk-11-jre openjdk-11-jdk

Configure environment variables

vim ~/.bashrc

export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

Make the changes take effect

source ~/.bashrc

Verify the installation
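A quick way to verify: java -version should print an OpenJDK 11 banner, and JAVA_HOME should point to the path configured above.

  1. java -version
  2. echo $JAVA_HOME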

Installing Scala

Download and extract

Download Scala 2.13.13 from https://www.scala-lang.org/download/2.13.13.html and make sure the archive ends up at ~/下载/scala-2.13.13.tgz.

Extract the archive to /usr/local and rename the directory to scala:

  1. sudo tar -zxf ~/下载/scala-2.13.13.tgz -C /usr/local
  2. cd /usr/local/
  3. sudo mv ./scala-2.13.13 ./scala

Configuration

Give the regular user ownership of the scala directory:

sudo chown -R hadoop ./scala 

Configure environment variables

vim ~/.bashrc

export PATH=$PATH:/usr/local/scala/bin

source ~/.bashrc

Verify the installation
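A quick check; the command should report version 2.13.13:

scala -version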

Installing SSH

Install

sudo apt install openssh-server

Log in

ssh localhost

Switch to the root user

su -

Edit sshd_config

vim /etc/ssh/sshd_config

Add the line PasswordAuthentication yes.

Configure passwordless login

  1. exit
  2. cd ~/.ssh/
  3. ssh-keygen -t rsa
  4. cat ./id_rsa.pub >> ./authorized_keys

Just press Enter at every ssh-keygen prompt.
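Once the public key is in authorized_keys, ssh localhost should log in without asking for a password:

  1. ssh localhost
  2. exit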

Installing Hadoop

Download Hadoop 3.3.6: https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz

Download and extract

  1. sudo tar -zxf ~/下载/hadoop-3.3.6.tar.gz -C /usr/local
  2. cd /usr/local/
  3. sudo mv ./hadoop-3.3.6/ ./hadoop

Configuration

sudo chown -R hadoop ./hadoop       # change the ownership

Add Hadoop environment variables

vim ~/.bashrc

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$HADOOP_HOME/bin

export PATH=$PATH:$HADOOP_HOME/sbin

Edit hadoop-env.sh and yarn-env.sh

  1. cd /usr/local/hadoop/etc/hadoop
  2. vim hadoop-env.sh
  3. vim yarn-env.sh

Append the following at the end of each file:

export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

Edit core-site.xml and hdfs-site.xml

  1. cd /usr/local/hadoop/etc/hadoop/
  2. vim core-site.xml

Change the <configuration> section as follows:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>Abase for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <!-- lets the hadoop user delete and upload files from the web UI -->
    <name>hadoop.http.staticuser.user</name>
    <value>hadoop</value>
  </property>
</configuration>

vim hdfs-site.xml

Change the <configuration> section as follows:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
</configuration>

Format the NameNode (this only needs to be run once; do not run it again later)

  1. cd /usr/local/hadoop
  2. ./bin/hdfs namenode -format

Start the NameNode and DataNode daemons

  1. cd /usr/local/hadoop
  2. ./sbin/start-dfs.sh
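If the daemons came up correctly, jps (a tool bundled with the JDK) should list at least NameNode, DataNode, and SecondaryNameNode:

jps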

Configuring YARN

Edit mapred-site.xml

  1. cd /usr/local/hadoop/etc/hadoop
  2. vim mapred-site.xml

Change the <configuration> section as follows:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Edit yarn-site.xml

vim yarn-site.xml

Change the <configuration> section as follows:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
</configuration>

Edit start-yarn.sh and stop-yarn.sh

  1. cd /usr/local/hadoop/sbin
  2. vim start-yarn.sh
  3. vim stop-yarn.sh

Add the following three lines to both files:

YARN_RESOURCEMANAGER_USER=root

HADOOP_SECURE_DN_USER=yarn

YARN_NODEMANAGER_USER=root

Start YARN

  1. cd /usr/local/hadoop
  2. ./sbin/start-yarn.sh
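jps should now additionally show a ResourceManager and a NodeManager process:

jps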

Start the history server

  1. cd /usr/local/hadoop
  2. ./bin/mapred --daemon start historyserver

The HDFS web UI is now available at http://localhost:9870/

Installing Spark

Download | Apache Spark: https://spark.apache.org/downloads.html

  1. sudo tar -zxf ./spark-3.5.1-bin-hadoop3-scala2.13.tgz -C /usr/local
  2. cd /usr/local
  3. sudo mv spark-3.5.1-bin-hadoop3-scala2.13/ spark

Configuration

sudo chown -R hadoop:hadoop spark   # here "hadoop" is your username

Edit spark-env.sh

  1. cd /usr/local/spark
  2. cp ./conf/spark-env.sh.template ./conf/spark-env.sh
  3. vim ./conf/spark-env.sh

Add the following configuration below the first line:

  1. export SPARK_MASTER_PORT=7077
  2. export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
  3. export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
  4. export SPARK_MASTER_IP=localhost
  5. export SPARK_LOCAL_IP=localhost

Start Spark

  1. cd /usr/local/spark
  2. ./sbin/start-all.sh
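If the standalone cluster started, jps should now also list a Master and a Worker process:

jps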

Test Spark

  1. cd /usr/local/spark
  2. bin/run-example SparkPi 2>&1 |grep "Pi is"
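The filtered output should be a single line roughly like the following (the digits vary from run to run):

Pi is roughly 3.14...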

Start spark-shell

  1. cd /usr/local/spark
  2. bin/spark-shell

Installing sbt

Download and extract

Download | sbt (scala-sbt.org): https://www.scala-sbt.org/download/

  1. sudo tar -zxvf ./sbt-1.9.9.tgz -C /usr/local
  2. cd /usr/local/sbt

If the download above is slow, sbt can instead be installed from the apt repository:

  1. echo "deb https://repo.scala-sbt.org/scalasbt/debian all main" | sudo tee /etc/apt/sources.list.d/sbt.list
  2. echo "deb https://repo.scala-sbt.org/scalasbt/debian /" | sudo tee /etc/apt/sources.list.d/sbt_old.list
  3. curl -sL "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x2EE0EA64E40A89B84B2DF73499E82A75642AC823" | sudo apt-key add
  4. sudo apt-get update
  5. sudo apt-get install sbt

Configuration

  1. sudo chown -R hadoop /usr/local/sbt
  2. cd /usr/local/sbt
  3. cp ./bin/sbt-launch.jar ./
  4. vim /usr/local/sbt/sbt

with the following content:

#!/bin/bash
SBT_OPTS="-Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M"
java $SBT_OPTS -jar `dirname $0`/sbt-launch.jar "$@"

Then make the script executable:

chmod u+x /usr/local/sbt/sbt

Start sbt

  1. cd /usr/local/sbt
  2. ./sbt sbtVersion

Create a project

  1. sudo mkdir -p /example/sparkapp/src/main/scala
  2. cd /example/sparkapp/src/main/scala
  3. sudo touch SimpleApp.scala
  4. sudo vim SimpleApp.scala

with the following content:

/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = "file:///usr/local/spark/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

Create the .sbt file

  1. cd /example/sparkapp
  2. sudo touch build.sbt
  3. sudo vim build.sbt

with the following content:

name := "Simple Project"
version := "1.0"
scalaVersion := "2.13.13"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.5.1"

Package

/usr/local/sbt/sbt package

If an error about not being able to create files appears (like the one below), prefix the command with sudo, or do the whole installation and configuration as the root user.

[error] [launcher] error during sbt launcher: java.io.IOException: Could not create directory /sparkapp/target/global-logging: java.nio.file.AccessDeniedException: /sparkapp/target

Run

  1. cd /example/sparkapp
  2. spark-submit --class "SimpleApp" ./target/scala-2.13/simple-project_2.13-1.0.jar 2>&1 | grep "Lines"

Installing Maven

Install

sudo apt install maven

Create a test project

  1. sudo mkdir -p /example/sparkapp2/src/main/scala
  2. cd /example/sparkapp2/src/main/scala
  3. sudo touch SimpleApp.scala
  4. sudo vim SimpleApp.scala

with the following content:

/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = "file:///usr/local/spark/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

Create the pom.xml file

  1. cd /example/sparkapp2
  2. sudo touch pom.xml
  3. sudo vim pom.xml

with the following content:

<project>
  <groupId>shuda.hunnu</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>
  <repositories>
    <repository>
      <id>jboss</id>
      <name>JBoss Repository</name>
      <url>http://repository.jboss.com/maven2/</url>
    </repository>
  </repositories>
  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.13</artifactId>
      <version>3.5.1</version>
    </dependency>
  </dependencies>
  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <scalaVersion>2.13.13</scalaVersion>
          <args>
            <arg>-target:jvm-11</arg>
          </args>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

Edit settings.xml

  1. sudo vim /usr/share/maven/conf/settings.xml
  2. sudo vim /etc/maven/settings.xml

Uncomment the <mirrors> section of the file, then add the following inside it:

<mirror>
  <id>alimaven</id>
  <name>aliyun maven</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
  <mirrorOf>central</mirrorOf>
</mirror>

If a tag goes missing (a paired tag accidentally becomes a single tag), you will get an error like this:

[ERROR] Error executing Maven.
[ERROR] 1 problem was encountered while building the effective settings
[FATAL] Non-parseable settings /usr/share/maven/conf/settings.xml: end tag name </settings> must match start tag name <mirrors> from line 146 (position: TEXT seen ...</activeProfiles>\n -->\n</settings>... @261:12) @ /usr/share/maven/conf/settings.xml, line 261, column 12

Package and run

The path of the generated .jar file may differ.

  1. sudo /usr/share/maven/bin/mvn package
  2. spark-submit --class "SimpleApp" ./target/simple-project-1.0.jar 2>&1 | grep "Lines"

Start Hadoop and create the user directory "/user/hadoop" in HDFS

  1. cd /usr/local/hadoop
  2. ./sbin/start-dfs.sh # start HDFS
  3. ./sbin/start-yarn.sh # start YARN
  4. hadoop fs -mkdir -p /user/hadoop # create the user directory /user/hadoop
  5. hadoop fs -ls /user # check that the directory was created

Create a text file test.txt under the local Linux directory "/home/hadoop", type some content into it, and upload it to the "/user/hadoop" directory in HDFS

  1. cd /home/hadoop
  2. sudo vim test.txt
  3. hadoop fs -put test.txt /user/hadoop

The same file cannot be uploaded twice: put: `/user/hadoop/test.txt': File exists
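Listing the directory is a quick way to confirm the upload:

hadoop fs -ls /user/hadoop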

Download the test.txt file from the HDFS directory "/user/hadoop" to the local directory "/home/hadoop/下载"

  1. sudo rm test.txt # delete the test.txt at the original location first
  2. hadoop fs -get /user/hadoop/test.txt /home/hadoop/
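A quick check that the file arrived at the target location used in the command above:

ls -l /home/hadoop/test.txt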

Print the contents of the test.txt file in the HDFS directory "/user/hadoop" to the terminal

hadoop fs -cat /user/hadoop/test.txt

Create a subdirectory input under "/user/hadoop" in HDFS, and copy the test.txt file from "/user/hadoop" to "/user/hadoop/input"

hadoop fs -mkdir -p /user/hadoop/input
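The mkdir above only creates the directory; the copy itself can be done with hadoop fs -cp:

hadoop fs -cp /user/hadoop/test.txt /user/hadoop/input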

Delete the test.txt file from the HDFS directory "/user/hadoop", and delete the input subdirectory under "/user/hadoop" together with all of its contents

hadoop fs -rm /user/hadoop/test.txt

hadoop fs -rm -r /user/hadoop/input  # hdfs dfs also works in place of hadoop fs

Use -r to delete the directory here; -rf is not accepted: -rm: Illegal option -rf
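Listing /user/hadoop again should show that both the file and the input directory are gone:

hadoop fs -ls /user/hadoop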

In spark-shell, read the local Linux file "/home/hadoop/test.txt" and count the number of lines in the file

  1. cd /usr/local/spark
  2. ./sbin/start-all.sh
  3. bin/spark-shell # start spark-shell

  1. val fileData=sc.textFile("file:/home/hadoop/test.txt")
  2. val count=fileData.count()

In spark-shell, read the HDFS file "/user/hadoop/test.txt" (create it first if it does not exist) and count the number of lines in the file

  1. val fileData=sc.textFile("/user/hadoop/test.txt")
  2. val count=fileData.count()

If the file: prefix is omitted, HDFS is used by default:

  1. val fileData=sc.textFile("hdfs:/user/hadoop/test.txt")
  2. val count=fileData.count()

Write a standalone application that reads the HDFS file "/user/hadoop/test.txt" (create it first if it does not exist) and counts the number of lines in the file; compile and package the application into a JAR with sbt, then submit the JAR to Spark with spark-submit

Create the project

  1. sudo mkdir -p /example/sparkapp3/src/main/scala
  2. cd /example/sparkapp3/src/main/scala
  3. sudo touch SimpleApp.scala
  4. sudo vim SimpleApp.scala

with the following content:

/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = "hdfs:/user/hadoop/test.txt"
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2)
    val num = logData.count()
    printf("The num of this file is %d\n", num)
  }
}

Create the .sbt file

  1. cd /example/sparkapp3
  2. sudo touch build.sbt
  3. sudo vim build.sbt

with the following content:

name := "Simple Project"
version := "1.0"
scalaVersion := "2.13.13"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.5.1"

Package and run

The --class argument has to match the class name; for convenience I kept the class name as SimpleApp.

  1. /usr/local/sbt/sbt package
  2. spark-submit --class "SimpleApp" ./target/scala-2.13/simple-project_2.13-1.0.jar 2>&1 | grep "num"

If you copy this straight from online posts, some of the paths will be wrong (the paths here are kept consistent with the earlier sections; many write-ups mix up ports and use different paths for the two methods, which is genuinely misleading). I have no idea how those authors got their examples to run, and on top of that they only post screenshots.

Summary

Working with HDFS is similar to working with the local file system, with a few small differences: commands are prefixed with hadoop fs - or hdfs dfs -, and some options behave differently. Writing the Scala program also had a few hiccups; what held me up longest was an online example that looked runnable but kept failing with path errors, and on closer inspection its files were simply placed in different locations.

The following lines need to be pasted into .bashrc.

  1. export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
  2. export PATH=$PATH:/usr/local/scala/bin
  3. export HADOOP_HOME=/usr/local/hadoop
  4. export PATH=$PATH:$HADOOP_HOME/bin
  5. export PATH=$PATH:$HADOOP_HOME/sbin
  6. export SPARK_HOME=/usr/local/spark
  7. export PATH=$PATH:$SPARK_HOME/bin
  8. export PATH=$PATH:$SPARK_HOME/sbin
  9. export LD_LIBRARY_PATH=/usr/local/hadoop/lib/native
  10. export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
  11. export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
