
Flink Cluster Installation and Setup

Note: prerequisites are a working JDK and Hadoop environment, plus the Flink installation package.
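Before starting, it is worth confirming the prerequisites. A quick sanity check (a sketch; it assumes the JDK and Hadoop are already on the PATH and the tarball was uploaded to /h3cu as in the command below):

[root@master /]# java -version      # the JDK should report its version
[root@master /]# hadoop version     # Hadoop should report its version
[root@master /]# ls /h3cu/flink-1.14.0-bin-scala_2.12.tgz   # the Flink tarball should be present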

1. Extract the Flink installation package

[root@master /]# tar -zxvf /h3cu/flink-1.14.0-bin-scala_2.12.tgz -C /usr/local/src/

2. Go to the installation directory and rename the extracted folder

[root@master /]# cd /usr/local/src/
[root@master src]# ls
flink-1.14.0 hadoop hbase hive jdk spark zk
[root@master src]# mv flink-1.14.0/ flink # rename to flink to simplify the environment variable setup
[root@master src]# ls
flink hadoop hbase hive jdk spark zk

3. Configure the environment variables

[root@master src]# vi /etc/profile
## append the following at the end of the file
export FLINK_HOME=/usr/local/src/flink
export PATH=$PATH:$FLINK_HOME/bin
export HADOOP_CLASSPATH=`hadoop classpath` # omitting this can cause errors later when submitting to YARN
[root@master src]# source /etc/profile # apply the environment variables
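To confirm the variables took effect, a quick check (a sketch, not part of the original session):

[root@master src]# echo $FLINK_HOME       # should print /usr/local/src/flink
[root@master src]# which flink            # should resolve to /usr/local/src/flink/bin/flink
[root@master src]# hadoop classpath | cut -c1-80   # should print the start of the Hadoop classpath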

4. Edit the configuration files under Flink's conf directory

[root@master src]# cd flink/conf/
[root@master conf]# ls
flink-conf.yaml log4j.properties logback-session.xml workers
log4j-cli.properties log4j-session.properties logback.xml zoo.cfg
log4j-console.properties logback-console.xml masters
[root@master conf]# vi flink-conf.yaml
jobmanager.rpc.address: master # point the JobManager RPC address at the master node
[root@master conf]# vi workers
master
slave1
slave2
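Before distributing anything, it is worth double-checking the two edits (a sketch, not from the original session):

[root@master conf]# grep '^jobmanager.rpc.address' flink-conf.yaml   # expect: jobmanager.rpc.address: master
[root@master conf]# cat workers                                      # expect master, slave1, slave2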

5. Distribute Flink and the environment variables to the worker nodes

[root@master conf]# scp /etc/profile slave1:/etc/
profile 100% 2576 1.5MB/s 00:00
[root@master conf]# scp /etc/profile slave2:/etc/
profile 100% 2576 1.4MB/s 00:00
[root@master conf]# scp -r /usr/local/src/flink/ slave1:/usr/local/src/
[root@master conf]# scp -r /usr/local/src/flink/ slave2:/usr/local/src/
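Note that copying /etc/profile does not apply it on the slaves; it is only read by new login shells. One way to verify both copies (assuming passwordless SSH between the nodes, which the scp commands above already rely on):

[root@master conf]# ssh slave1 'source /etc/profile; echo $FLINK_HOME'   # should print /usr/local/src/flink
[root@master conf]# ssh slave2 'source /etc/profile; echo $FLINK_HOME'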

6. Flink word count test

[root@master conf]# flink run -m yarn-cluster -p 2 /usr/local/src/flink/examples/batch/WordCount.jar
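The -m yarn-cluster flag submits the job to YARN (this is why HADOOP_CLASSPATH was needed in step 3) and -p 2 sets the parallelism to 2. The masters/workers files edited in step 4 instead describe a standalone cluster; if you also want to verify that mode, a minimal sketch using the scripts Flink ships in its bin directory:

[root@master conf]# start-cluster.sh   # starts the JobManager on master and a TaskManager on every host in workers
[root@master conf]# jps                # the JobManager shows up as StandaloneSessionClusterEntrypoint on master
[root@master conf]# flink run /usr/local/src/flink/examples/batch/WordCount.jar   # submit to the standalone cluster
[root@master conf]# stop-cluster.sh    # shut the standalone cluster down again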

7. The job runs and its results are clearly visible, but the following error appears afterwards

Exception in thread "Thread-5" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'.
    at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
    at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResource(FlinkUserCodeClassLoaders.java:183)
    at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2780)
    at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3036)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2995)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2968)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:1200)
    at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1812)
    at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1789)
    at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
    at org.apache.hadoop.util.ShutdownHookManager.shutdownExecutor(ShutdownHookManager.java:145)
    at org.apache.hadoop.util.ShutdownHookManager.access$300(ShutdownHookManager.java:65)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:102)

8. Fix the error

[root@master conf]# vi flink-conf.yaml
# add the following line anywhere in the file (note: the space after the colon is required)
classloader.check-leaked-classloader: false
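To confirm the fix, re-run the same job; the word counts should print as before, this time without the classloader exception. Since this setting is read from the client-side flink-conf.yaml, it may also be worth copying the changed file to the other nodes to keep the cluster configuration consistent (a suggestion, not part of the original post):

[root@master conf]# scp flink-conf.yaml slave1:/usr/local/src/flink/conf/
[root@master conf]# scp flink-conf.yaml slave2:/usr/local/src/flink/conf/
[root@master conf]# flink run -m yarn-cluster -p 2 /usr/local/src/flink/examples/batch/WordCount.jar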
