
Hadoop 3.x Distributed HA with Kerberos Integration (Step-by-Step Guide)

Prerequisite: Kerberos (the KDC) must already be installed.

1. Create the keytab directories

Create the keytab directory on every node ahead of time (a loop over all nodes is sketched after the listing):

[hadoop@tv3-hadoop-01 ~]$ sudo mkdir -p /BigData/run/hadoop/keytab/

[hadoop@tv3-hadoop-01 ~]$ sudo mkdir -p /opt/security/

[hadoop@tv3-hadoop-01 ~]$ sudo chown hadoop:hadoop /BigData/run/hadoop/keytab/

[hadoop@tv3-hadoop-01 ~]$ ls -lrt /BigData/run/hadoop/

drwxr-xr-x 2 hadoop hadoop  4096 Jun 26 23:22 keytab
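Running these commands by hand on six machines is tedious. A minimal sketch that creates the directories everywhere in one pass, assuming the hadoop user has passwordless SSH and passwordless sudo on every host (both are assumptions about your environment):

for i in 01 02 03 04 05 06; do
  ssh hadoop@tv3-hadoop-${i} \
    "sudo mkdir -p /BigData/run/hadoop/keytab /opt/security && \
     sudo chown hadoop:hadoop /BigData/run/hadoop/keytab"
done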

2. Create the Kerberos principals

Log in to the admin node, e.g. tv3-hadoop-01 (in this example all Hadoop services run as the hadoop user):

# Enter kadmin

[root@tv3-hadoop-01 ~]# kadmin.local

Authenticating as principal hadoop/admin@EXAMPLE.COM with password.

kadmin.local:  

# List principals

kadmin.local:  listprincs

# Create a principal

addprinc -randkey hadoop/tv3-hadoop-01@EXAMPLE.COM

3. Add the principals and export keytabs

Add a principal for each of the remaining HDFS nodes in the same way, then export the hadoop principals to /BigData/run/hadoop/keytab/hadoop.keytab and the HTTP principals to /BigData/run/hadoop/keytab/HTTP.keytab (a scripted equivalent is sketched after the list):

addprinc -randkey hadoop/tv3-hadoop-01@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-02@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-03@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-04@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-05@EXAMPLE.COM
addprinc -randkey hadoop/tv3-hadoop-06@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-01@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-02@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-03@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-04@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-05@EXAMPLE.COM
addprinc -randkey HTTP/tv3-hadoop-06@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-01@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-02@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-03@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-04@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-05@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/hadoop.keytab hadoop/tv3-hadoop-06@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-01@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-02@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-03@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-04@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-05@EXAMPLE.COM
ktadd -k /BigData/run/hadoop/keytab/HTTP.keytab HTTP/tv3-hadoop-06@EXAMPLE.COM
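Typing the 24 kadmin commands by hand invites typos. A minimal scripted equivalent, assuming it runs as root on the KDC host and that the hostnames follow the tv3-hadoop-NN pattern used above:

#!/usr/bin/env bash
# Create hadoop/ and HTTP/ principals for every node and export the keytabs.
REALM=EXAMPLE.COM
KEYTAB_DIR=/BigData/run/hadoop/keytab
for i in 01 02 03 04 05 06; do
  host="tv3-hadoop-${i}"
  kadmin.local -q "addprinc -randkey hadoop/${host}@${REALM}"
  kadmin.local -q "addprinc -randkey HTTP/${host}@${REALM}"
  kadmin.local -q "ktadd -k ${KEYTAB_DIR}/hadoop.keytab hadoop/${host}@${REALM}"
  kadmin.local -q "ktadd -k ${KEYTAB_DIR}/HTTP.keytab HTTP/${host}@${REALM}"
done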

4. Fix permissions & sync the keytabs

Change ownership to the user that starts Hadoop (otherwise services will fail with permission errors), and copy the keytabs to every node that runs an HDFS or YARN service (JN, DN, NN, RM, NM); a loop over all nodes is sketched after these commands:

su - hadoop
sudo chown hadoop:hadoop /BigData/run/hadoop/keytab/*.keytab
scp /BigData/run/hadoop/keytab/hadoop.keytab /BigData/run/hadoop/keytab/HTTP.keytab hadoop@tv3-hadoop-06:/BigData/run/hadoop/keytab
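The scp above covers a single node; a minimal loop that pushes both keytabs to all of the other nodes, assuming passwordless SSH between the hadoop accounts:

for i in 02 03 04 05 06; do
  scp /BigData/run/hadoop/keytab/hadoop.keytab \
      /BigData/run/hadoop/keytab/HTTP.keytab \
      hadoop@tv3-hadoop-${i}:/BigData/run/hadoop/keytab/
done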

5. Update the configuration files

5.1 hdfs-site.xml

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
  <description>Enable HDFS block access tokens for secure operations</description>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
  <description>Kerberos principal for the NameNode; _HOST is replaced with the local hostname automatically</description>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
  <description>Principals created with -randkey have random, unknown passwords, so the NameNode logs in with this keytab file instead</description>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
  <description>Principal used for SPNEGO/HTTPS (e.g. the NameNode web UI)</description>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.keytab</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
  <description>Principal used by the SecondaryNameNode</description>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
  <description>Keytab file for the SecondaryNameNode</description>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
  <description>SPNEGO principal for the SecondaryNameNode web UI</description>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.internal.spnego.keytab</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.keytab</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
  <description>Enable encryption for the data transfer protocol</description>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
  <description>Principal used by the DataNode</description>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
  <description>Keytab file path for the DataNode</description>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>
<!-- required if hdfs supports https -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<!-- WebHDFS security config -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
  <description>Principal used by WebHDFS</description>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
  <description>Keytab file for WebHDFS</description>
</property>

5.2 core-site.xml

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
  <description>Enable HDFS block access tokens for secure operations</description>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
  <description>Enable Hadoop service-level authorization</description>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
  <description>Use Kerberos as Hadoop's authentication mechanism</description>
</property>
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
  <description>authentication : authentication only (default); integrity : integrity check in addition to authentication; privacy : data encryption in addition to integrity</description>
</property>
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](hadoop@.*EXAMPLE.COM)s/.*/hadoop/
    RULE:[2:$1@$0](HTTP@.*EXAMPLE.COM)s/.*/hadoop/
    DEFAULT
  </value>
</property>
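The auth_to_local rules are easy to get wrong, and they can be tested without starting any daemon: the hadoop CLI has a kerbname subcommand that applies the configured rules to a principal. A quick sanity check (the comments show what the rules above should produce):

hadoop kerbname hadoop/tv3-hadoop-01@EXAMPLE.COM
# expected: Name: hadoop/tv3-hadoop-01@EXAMPLE.COM to hadoop
hadoop kerbname HTTP/tv3-hadoop-02@EXAMPLE.COM
# expected: Name: HTTP/tv3-hadoop-02@EXAMPLE.COM to hadoop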

5.3 yarn-site.xml

<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
  <value>false</value>
  <description>Flag to enable override of the default Kerberos authentication filter with the RM authentication filter, which allows authentication with delegation tokens (falling back to Kerberos when a token is missing). Only applies when the HTTP authentication type is kerberos.</description>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.keytab</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>yarn.acl.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.web-proxy.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.web-proxy.keytab</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
</property>
<!-- nodemanager -->
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.path</name>
  <value>/BigData/run/hadoop/bin/container-executor</value>
</property>
<!-- webapp configs -->
<property>
  <name>yarn.resourcemanager.webapp.spnego-principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.spnego-keytab-file</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>yarn.timeline-service.http-authentication.type</name>
  <value>kerberos</value>
  <description>Defines authentication used for the timeline server HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#</description>
</property>
<property>
  <name>yarn.timeline-service.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.timeline-service.keytab</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
</property>
<property>
  <name>yarn.timeline-service.http-authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.container-localizer.java.opts</name>
  <value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
</property>
<property>
  <name>yarn.nodemanager.health-checker.script.opts</name>
  <value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
  <value>hadoop</value>
</property>
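LinuxContainerExecutor only works when the container-executor binary is setuid root and its container-executor.cfg is root-owned. A setup sketch to run as root on every NodeManager host; note that the cfg path is compiled into the binary, so /BigData/run/hadoop/etc/hadoop is an assumption that may differ in your build:

# The binary must be root-owned, group-owned by the NM group, and setuid/setgid.
chown root:hadoop /BigData/run/hadoop/bin/container-executor
chmod 6050 /BigData/run/hadoop/bin/container-executor

# Minimal container-executor.cfg; must be owned by root and not world-writable.
cat > /BigData/run/hadoop/etc/hadoop/container-executor.cfg <<'EOF'
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=bin
min.user.id=500
allowed.system.users=hadoop
EOF
chown root:hadoop /BigData/run/hadoop/etc/hadoop/container-executor.cfg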

5.4 mapred-site.xml

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=tv3-hadoop-01:88</value>
</property>
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/BigData/run/hadoop/keytab/hadoop.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hadoop/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.spnego-keytab-file</name>
  <value>/BigData/run/hadoop/keytab/HTTP.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.spnego-principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>

5.5 Sync the configuration files to every node

cd /BigData/run/hadoop/etc/hadoop
scp hdfs-site.xml yarn-site.xml core-site.xml mapred-site.xml hadoop@tv3-hadoop-06:/BigData/run/hadoop/etc/hadoop/

6. Configure SSL (enable HTTPS)

6.1 Create the certificate directory (run on every node)

[hadoop@tv3-hadoop-01 hadoop]# mkdir -p /opt/security/kerberos_https

[hadoop@tv3-hadoop-01 hadoop]# cd /opt/security/kerberos_https

6.2 Generate the CA certificate on any one Hadoop node

[root@tv3-hadoop-01 kerberos_https]# openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 9999 -subj /C=CN/ST=shanxi/L=xian/O=hlk/OU=hlk/CN=tv3-hadoop01
Generating a 2048 bit RSA private key
...........................................................................................+++
.................................................................................+++
writing new private key to 'hdfs_ca_key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
[root@tv3-hadoop-01 kerberos_https]# ls -lrt
total 8
-rw-r--r-- 1 root root 1834 Jun 29 09:45 hdfs_ca_key
-rw-r--r-- 1 root root 1302 Jun 29 09:45 hdfs_ca_cert

6.3 Copy the CA certificate to every node

scp -r /opt/security/kerberos_https root@tv3-hadoop-06:/opt/security/

6.4 Create the per-node certificates (run on every Hadoop node; a non-interactive version is sketched after this block)

cd /opt/security/kerberos_https
# For convenience every password prompt below uses 123456; change it if you need a stronger password.
# 1. Enter and confirm the password (123456); on success this produces the keystore file.
name="CN=$HOSTNAME, OU=hlk, O=hlk, L=xian, ST=shanxi, C=CN"
# You will be asked for the password from step 1 four times.
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "$name"
# 2. Enter and confirm the password (123456); answer yes when asked whether to trust the certificate; produces the truststore file.
keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert
# 3. Enter and confirm the password (123456); produces the cert file (a signing request).
keytool -certreq -alias localhost -keystore keystore -file cert
# 4. Produces the cert_signed file.
openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial
# 5. Enter the password (123456) and answer yes to trust the certificate; updates the keystore file.
keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed
[root@tv3-hadoop-06 kerberos_https]# ls -lrt
total 28
-rw-r--r-- 1 root root 1302 Jun 29 09:57 hdfs_ca_cert
-rw-r--r-- 1 root root 1834 Jun 29 09:57 hdfs_ca_key
-rw-r--r-- 1 root root  984 Jun 29 10:03 truststore
-rw-r--r-- 1 root root 1085 Jun 29 10:03 cert
-rw-r--r-- 1 root root   17 Jun 29 10:04 hdfs_ca_cert.srl
-rw-r--r-- 1 root root 1188 Jun 29 10:04 cert_signed
-rw-r--r-- 1 root root 4074 Jun 29 10:04 keystore
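The interactive prompts make this painful to repeat on six nodes. A non-interactive sketch of the same five steps, assuming the shared 123456 password and that hdfs_ca_cert/hdfs_ca_key are already in the working directory (keytool's -storepass/-keypass/-noprompt and openssl's -passin suppress the prompts):

cd /opt/security/kerberos_https
PASS=123456
name="CN=$HOSTNAME, OU=hlk, O=hlk, L=xian, ST=shanxi, C=CN"
# 1. Generate the host key pair into the keystore.
keytool -keystore keystore -alias localhost -validity 9999 -genkey \
  -keyalg RSA -keysize 2048 -dname "$name" -storepass "$PASS" -keypass "$PASS"
# 2. Import the CA certificate into the truststore.
keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert \
  -storepass "$PASS" -noprompt
# 3. Produce the certificate signing request (cert).
keytool -certreq -alias localhost -keystore keystore -file cert -storepass "$PASS"
# 4. Sign the request with the CA key; the CA passphrase is supplied inline.
openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed \
  -days 9999 -CAcreateserial -passin pass:"$PASS"
# 5. Import the CA certificate and the signed host certificate into the keystore.
keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert \
  -storepass "$PASS" -noprompt
keytool -keystore keystore -alias localhost -import -file cert_signed \
  -storepass "$PASS" -noprompt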

6.5 Create the SSL server file

Create ssl-server.xml in the ${HADOOP_HOME}/etc/hadoop directory:

<configuration>
    <property>
        <name>ssl.server.truststore.location</name>
        <value>/opt/security/kerberos_https/truststore</value>
        <description>Truststore to be used by NN and DN. Must be specified.</description>
    </property>
    <property>
        <name>ssl.server.truststore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "".</description>
    </property>
    <property>
        <name>ssl.server.truststore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".</description>
    </property>
    <property>
        <name>ssl.server.truststore.reload.interval</name>
        <value>10000</value>
        <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
    </property>
    <property>
        <name>ssl.server.keystore.location</name>
        <value>/opt/security/kerberos_https/keystore</value>
        <description>Keystore to be used by NN and DN. Must be specified.</description>
    </property>
    <property>
        <name>ssl.server.keystore.password</name>
        <value>123456</value>
        <description>Must be specified.</description>
    </property>
    <property>
        <name>ssl.server.keystore.keypassword</name>
        <value>123456</value>
        <description>Must be specified.</description>
    </property>
    <property>
        <name>ssl.server.keystore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".</description>
    </property>
    <property>
        <name>ssl.server.exclude.cipher.list</name>
        <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
        SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
        SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
        SSL_RSA_WITH_RC4_128_MD5</value>
        <description>Optional. The weak security cipher suites that you want excluded from SSL communication.</description>
    </property>
</configuration>
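Before starting any daemon it is worth confirming that the keystore on each node holds both the CA and the signed host certificate. A quick check, assuming the 123456 password from above:

keytool -list -keystore /opt/security/kerberos_https/keystore -storepass 123456
# Expect two entries: CARoot (trustedCertEntry) and localhost (PrivateKeyEntry).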

6.6 Create the SSL client file

Create ssl-client.xml in the same directory:

<configuration>
    <property>
        <name>ssl.client.truststore.location</name>
        <value>/opt/security/kerberos_https/truststore</value>
        <description>Truststore to be used by clients like distcp. Must be specified.</description>
    </property>
    <property>
        <name>ssl.client.truststore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "".</description>
    </property>
    <property>
        <name>ssl.client.truststore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".</description>
    </property>
    <property>
        <name>ssl.client.truststore.reload.interval</name>
        <value>10000</value>
        <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
    </property>
    <property>
        <name>ssl.client.keystore.location</name>
        <value>/opt/security/kerberos_https/keystore</value>
        <description>Keystore to be used by clients like distcp. Must be specified.</description>
    </property>
    <property>
        <name>ssl.client.keystore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "".</description>
    </property>
    <property>
        <name>ssl.client.keystore.keypassword</name>
        <value>123456</value>
        <description>Optional. Default value is "".</description>
    </property>
    <property>
        <name>ssl.client.keystore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".</description>
    </property>
</configuration>

6.7 Configure HTTPS for HDFS (sync to every node after editing)

<property>
    <name>dfs.http.policy</name>
    <value>HTTPS_ONLY</value>
    <description>Every web UI is served over HTTPS only; the details are configured in the ssl-server and ssl-client files</description>
</property>

7. Start Hadoop and run basic checks

7.1 HA startup order

Start the services in this order: JN, NN, ZKFC, DN, RM, NM.

7.2 Start the JournalNodes (run kinit on each node before starting any service there)

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart the JournalNode
hadoop-daemon.sh stop journalnode && hadoop-daemon.sh start journalnode
## Start the JournalNode
hadoop-daemon.sh start journalnode
## Stop the JournalNode
hadoop-daemon.sh stop journalnode

7.3 Start the NameNode and ZKFC services

If this is a new cluster, format the NameNode first:

hadoop namenode -format
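In an HA cluster the format step only runs on the first NameNode. For a fresh HA deployment the usual extra bootstrap steps are sketched below (standard Hadoop commands; adjust to your rollout order):

# On the first NameNode:
hdfs namenode -format
# On the second NameNode, after the first one is running, to copy its metadata:
hdfs namenode -bootstrapStandby
# Once, from either NameNode, to initialize the failover znode in ZooKeeper:
hdfs zkfc -formatZK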
kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart the NameNode
hadoop-daemon.sh stop namenode && hadoop-daemon.sh start namenode
## Start the NameNode
hadoop-daemon.sh start namenode
## Stop the NameNode
hadoop-daemon.sh stop namenode
## Restart the ZKFC
hadoop-daemon.sh stop zkfc && hadoop-daemon.sh start zkfc
## Start the ZKFC
hadoop-daemon.sh start zkfc
## Stop the ZKFC
hadoop-daemon.sh stop zkfc

7.4 Start the DataNode service

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart the DataNode
hadoop-daemon.sh stop datanode && hadoop-daemon.sh start datanode
## Start the DataNode
hadoop-daemon.sh start datanode
## Stop the DataNode
hadoop-daemon.sh stop datanode

7.5 Verify HA failover (multiple NameNodes)

[hadoop@tv3-hadoop-01 hadoop]$ hdfs haadmin -failover nn2 nn1
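To confirm the failover actually took effect, query each NameNode's state before and after; hdfs haadmin reports it directly (assuming the nn1/nn2 IDs used above):

hdfs haadmin -getServiceState nn1   # expected: active (after the failover above)
hdfs haadmin -getServiceState nn2   # expected: standby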

7.6 Verify HDFS file read/write

[hadoop@tv3-hadoop-01 ~]$ echo '123' > b
[hadoop@tv3-hadoop-01 ~]$ hdfs dfs -put -f b /tmp/
[hadoop@tv3-hadoop-01 ~]$ hdfs dfs -cat /tmp/b
123
[hadoop@tv3-hadoop-01 ~]$

7.7 Web UI access after enabling HTTPS

With dfs.http.policy set to HTTPS_ONLY, the plain-HTTP web UI stops responding; the NameNode UI is served on the HTTPS port instead (50470, as configured above). A command-line check is sketched below.
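A minimal check with curl, assuming a valid ticket from kinit, a curl built with GSS-API/SPNEGO support (--negotiate -u : enables Kerberos auth), and -k to skip CA verification of the self-signed certificate:

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
curl -k --negotiate -u : "https://tv3-hadoop-01:50470/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"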

7.8 Start the ResourceManager service

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart the ResourceManager
yarn --daemon stop resourcemanager && yarn --daemon start resourcemanager
## Start the ResourceManager
yarn --daemon start resourcemanager
## Stop the ResourceManager
yarn --daemon stop resourcemanager

7.9 Start the NodeManager service

kinit -kt /BigData/run/hadoop/keytab/hadoop.keytab hadoop/$HOSTNAME@EXAMPLE.COM
## Restart the NodeManager
yarn --daemon stop nodemanager && yarn --daemon start nodemanager
## Start the NodeManager
yarn --daemon start nodemanager
## Stop the NodeManager
yarn --daemon stop nodemanager

7.10 Verify a MapReduce job

hadoop jar /BigData/run/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 5 10

Output like the following means YARN is deployed correctly:

Job Finished in 66.573 seconds
Estimated value of Pi is 3.28000000000000000000
[hadoop@tv3-hadoop-01 hadoop]$