
Configuring Kerberos Authentication for Hadoop


# Run kadmin and enter the password to reach the Kerberos admin console
kadmin

# Create the principals
addprinc -randkey dn/bigdata1.example.com@EXAMPLE.COM
addprinc -randkey HTTP/bigdata1.example.com@EXAMPLE.COM

addprinc -randkey nm/bigdata1.example.com@EXAMPLE.COM

# Create keytab files for password-less login, so no password prompt
# appears during startup or normal operation
# Create keytabs for the dn, HTTP, and nm principals
ktadd -k /etc/security/keytabs/dn.service.keytab dn/bigdata1.example.com@EXAMPLE.COM
ktadd -k /etc/security/keytabs/spnego.service.keytab HTTP/bigdata1.example.com@EXAMPLE.COM

ktadd -k /etc/security/keytabs/nm.service.keytab nm/bigdata1.example.com@EXAMPLE.COM

# Run kadmin and enter the password to reach the Kerberos admin console
kadmin

# Create the principals
addprinc -randkey dn/bigdata2.example.com@EXAMPLE.COM
addprinc -randkey HTTP/bigdata2.example.com@EXAMPLE.COM

addprinc -randkey nm/bigdata2.example.com@EXAMPLE.COM

# Create keytab files for password-less login, so no password prompt
# appears during startup or normal operation
# Create keytabs for the dn, HTTP, and nm principals
ktadd -k /etc/security/keytabs/dn.service.keytab dn/bigdata2.example.com@EXAMPLE.COM
ktadd -k /etc/security/keytabs/spnego.service.keytab HTTP/bigdata2.example.com@EXAMPLE.COM

ktadd -k /etc/security/keytabs/nm.service.keytab nm/bigdata2.example.com@EXAMPLE.COM
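Before wiring the keytabs into Hadoop, it is worth confirming on each node that they actually work. A quick sanity check, assuming the paths and principals created above (klist and kinit are the standard MIT Kerberos client tools):

# list the entries stored in a keytab
klist -kt /etc/security/keytabs/dn.service.keytab
# obtain a ticket with the keytab instead of a password
kinit -kt /etc/security/keytabs/dn.service.keytab dn/bigdata1.example.com@EXAMPLE.COM
klist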


3. Hadoop Configuration Files

3.1 Permission Setup
# Save the following into a .sh file and run it as root: sh xxx.sh
# Adjust the paths below to match your own directory layout
HADOOP_HOME=/bigdata/hadoop-3.3.2
DFS_NAMENODE_NAME_DIR=/data/nn
DFS_DATANODE_DATA_DIR=/data/dn
NODEMANAGER_LOCAL_DIR=/data/nm-local
NODEMANAGER_LOG_DIR=/data/nm-log
MR_HISTORY=/data/mr-history
if [ ! -n "$HADOOP_HOME" ];then
        echo "Please set the HADOOP_HOME path"
        exit
fi
chgrp -R hadoop $HADOOP_HOME
chown -R hdfs:hadoop $HADOOP_HOME
chown root:hadoop $HADOOP_HOME
chown hdfs:hadoop $HADOOP_HOME/sbin/distribute-exclude.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/hadoop-daemon.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/hadoop-daemons.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/hdfs-config.cmd
chown hdfs:hadoop $HADOOP_HOME/sbin/hdfs-config.sh
chown mapred:hadoop $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/refresh-namenodes.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/slaves.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/start-all.cmd
chown hdfs:hadoop $HADOOP_HOME/sbin/start-all.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/start-balancer.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/start-dfs.cmd
chown hdfs:hadoop $HADOOP_HOME/sbin/start-dfs.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/start-secure-dns.sh
chown yarn:hadoop $HADOOP_HOME/sbin/start-yarn.cmd
chown yarn:hadoop $HADOOP_HOME/sbin/start-yarn.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/stop-all.cmd
chown hdfs:hadoop $HADOOP_HOME/sbin/stop-all.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/stop-balancer.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/stop-dfs.cmd
chown hdfs:hadoop $HADOOP_HOME/sbin/stop-dfs.sh
chown hdfs:hadoop $HADOOP_HOME/sbin/stop-secure-dns.sh
chown yarn:hadoop $HADOOP_HOME/sbin/stop-yarn.cmd
chown yarn:hadoop $HADOOP_HOME/sbin/stop-yarn.sh
chown yarn:hadoop $HADOOP_HOME/sbin/yarn-daemon.sh
chown yarn:hadoop $HADOOP_HOME/sbin/yarn-daemons.sh
chown mapred:hadoop $HADOOP_HOME/bin/mapred*
chown yarn:hadoop $HADOOP_HOME/bin/yarn*
chown hdfs:hadoop $HADOOP_HOME/bin/hdfs*
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/configuration.xsl
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/core-site.xml
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/hadoop-*
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/hdfs-*
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/httpfs-*
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/kms-*
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/log4j.properties
chown mapred:hadoop $HADOOP_HOME/etc/hadoop/mapred-*
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/slaves
chown hdfs:hadoop $HADOOP_HOME/etc/hadoop/ssl-*
chown yarn:hadoop $HADOOP_HOME/etc/hadoop/yarn-*
chmod 755 -R $HADOOP_HOME/etc/hadoop/*
chown root:hadoop $HADOOP_HOME/etc
chown root:hadoop $HADOOP_HOME/etc/hadoop
chown root:hadoop $HADOOP_HOME/etc/hadoop/container-executor.cfg
chown root:hadoop $HADOOP_HOME/bin/container-executor
chown root:hadoop $HADOOP_HOME/bin/test-container-executor
chmod 6050 $HADOOP_HOME/bin/container-executor
chmod 6050 $HADOOP_HOME/bin/test-container-executor
mkdir $HADOOP_HOME/logs
mkdir $HADOOP_HOME/logs/hdfs
mkdir $HADOOP_HOME/logs/yarn
chown root:hadoop $HADOOP_HOME/logs
chmod 775 $HADOOP_HOME/logs
chown hdfs:hadoop $HADOOP_HOME/logs/hdfs
chmod 755 -R $HADOOP_HOME/logs/hdfs
chown yarn:hadoop $HADOOP_HOME/logs/yarn
chmod 755 -R $HADOOP_HOME/logs/yarn
chown -R hdfs:hadoop $DFS_DATANODE_DATA_DIR
chown -R hdfs:hadoop $DFS_NAMENODE_NAME_DIR
chmod 700 $DFS_DATANODE_DATA_DIR
chmod 700 $DFS_NAMENODE_NAME_DIR
chown -R yarn:hadoop $NODEMANAGER_LOCAL_DIR
chown -R yarn:hadoop $NODEMANAGER_LOG_DIR
chmod 770 $NODEMANAGER_LOCAL_DIR
chmod 770 $NODEMANAGER_LOG_DIR
chown -R mapred:hadoop $MR_HISTORY
chmod 770 $MR_HISTORY
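After the script runs, spot-check the one binary that needs special mode bits. The expected listing below follows from chmod 6050 (setuid + setgid, group r-x) and the root:hadoop ownership set above:

ls -l $HADOOP_HOME/bin/container-executor
# expected mode and ownership: ---Sr-s--- root hadoop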

3.2 Hadoop Configuration

The environment-variable settings are optional; if they are already configured in /etc/profile, they can be omitted from the Hadoop configuration files.

hadoop-env.sh
# Optional; can be omitted if already configured in /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_221
export HADOOP_HOME=/bigdata/hadoop-3.3.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"

yarn-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_221
export HADOOP_HOME=/bigdata/hadoop-3.3.2
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn

mapred-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_221

core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata0.example.com:8020</value>
        <description></description>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
        <description></description>
    </property>

    <!-- Kerberos-related settings -->
    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
        <description>Whether to enable Hadoop security authorization</description>
    </property>

    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
        <description>Use Kerberos as Hadoop's authentication scheme</description>
    </property>
    <property>
        <name>hadoop.security.auth_to_local</name>
        <value>
            RULE:[2:$1@$0](nn@.*EXAMPLE.COM)s/.*/hdfs/
            RULE:[2:$1@$0](sn@.*EXAMPLE.COM)s/.*/hdfs/
            RULE:[2:$1@$0](dn@.*EXAMPLE.COM)s/.*/hdfs/
            RULE:[2:$1@$0](nm@.*EXAMPLE.COM)s/.*/yarn/
            RULE:[2:$1@$0](rm@.*EXAMPLE.COM)s/.*/yarn/
            RULE:[2:$1@$0](tl@.*EXAMPLE.COM)s/.*/yarn/
            RULE:[2:$1@$0](jhs@.*EXAMPLE.COM)s/.*/mapred/
            RULE:[2:$1@$0](HTTP@.*EXAMPLE.COM)s/.*/hdfs/
            DEFAULT
        </value>
        <description>Mapping rules that translate Kerberos principals to local accounts. For example, the first rule maps nn/xxx@EXAMPLE.COM to the local hdfs account.</description>
    </property>

    <!-- HIVE KERBEROS -->
    <property>
        <name>hadoop.proxyuser.hive.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hive.groups</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hdfs.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hdfs.groups</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.HTTP.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.HTTP.groups</name>
        <value>*</value>
    </property>
  
</configuration>
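You can check how the auth_to_local rules resolve a given principal without starting any daemon. Recent Hadoop releases ship a helper subcommand for exactly this (verify your version has it; it wraps org.apache.hadoop.security.HadoopKerberosName):

hadoop kerbname nn/bigdata1.example.com@EXAMPLE.COM
# expected output: Name: nn/bigdata1.example.com@EXAMPLE.COM to hdfs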

hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/nn</value>
        <description>Path on the local filesystem where the NameNode stores the
            namespace and transactions logs persistently.
        </description>
    </property>

    <property>
        <name>dfs.namenode.hosts</name>
        <value>bigdata1.example.com,bigdata2.example.com</value>
        <description>List of permitted DataNodes.</description>
    </property>

    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
        <description></description>
    </property>
    
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
        <description></description>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/dn</value>
    </property>
  
      <property>
        <name>dfs.permissions.supergroup</name>
        <value>hdfs</value>
    </property>

    <property>
        <name>dfs.http.policy</name>
        <value>HTTPS_ONLY</value>
        <description>Serve all web UIs over HTTPS only; the details are configured in ssl-server.xml and ssl-client.xml</description>
    </property>

    <property>
        <name>dfs.data.transfer.protection</name>
        <value>integrity</value>
    </property>
    <property>
        <name>dfs.https.port</name>
        <value>50470</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>700</value>
    </property>
  
    <!-- Kerberos-related settings -->
    <!-- With Kerberos enabled, this must be true; otherwise the DataNode fails to start -->
    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
    </property>

    <!-- NameNode security config -->
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>nn/_HOST@EXAMPLE.COM</value>
        <description>The NameNode's Kerberos principal is nn/hostname@EXAMPLE.COM; _HOST is replaced with the local hostname automatically
        </description>
    </property>
    <property>
        <name>dfs.namenode.keytab.file</name>
        <!-- path to the HDFS keytab -->
        <value>/etc/security/keytabs/nn.service.keytab</value>
        <description>Keytab file used by the NameNode for password-less login</description>
    </property>

    <property>
        <name>dfs.namenode.kerberos.internal.spnego.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
        <description>Principal used for HTTPS endpoints such as the NameNode web UI</description>
    </property>
    <!--Secondary NameNode security config -->
    <property>
        <name>dfs.secondary.namenode.kerberos.principal</name>
        <value>sn/_HOST@EXAMPLE.COM</value>
        <description>Principal used by the SecondaryNameNode</description>
    </property>
    <property>
        <name>dfs.secondary.namenode.keytab.file</name>
        <!-- path to the HDFS keytab -->
        <value>/etc/security/keytabs/sn.service.keytab</value>
        <description>Keytab file for the sn principal</description>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
        <description>Principal the SecondaryNameNode uses for its HTTP endpoint</description>
    </property>
    <!-- DataNode security config -->
    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>dn/_HOST@EXAMPLE.COM</value>
        <description>Principal used by the DataNode</description>
    </property>
    <property>
        <name>dfs.datanode.keytab.file</name>
        <!-- path to the HDFS keytab -->
        <value>/etc/security/keytabs/dn.service.keytab</value>
        <description>Path to the keytab file used by the DataNode</description>
    </property>

    <property>
        <name>dfs.web.authentication.kerberos.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
        <description>Principal used by WebHDFS</description>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.keytab</name>
        <value>/etc/security/keytabs/spnego.service.keytab</value>
        <description>The corresponding keytab file</description>
    </property>

</configuration>
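Once hdfs-site.xml is in place, you can confirm the daemons will pick up the intended values without starting them, using the standard getconf tool (it prints the raw configured value, with _HOST left unsubstituted):

hdfs getconf -confKey dfs.namenode.kerberos.principal
# expected output: nn/_HOST@EXAMPLE.COM
hdfs getconf -confKey dfs.datanode.keytab.file
# expected output: /etc/security/keytabs/dn.service.keytab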

yarn-site.xml
<?xml version="1.0"?>
<!--
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <property>
        <name>yarn.log.server.url</name>
        <value>https://bigdata0.example.com:19890/jobhistory/logs</value>
        <description></description>
    </property>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.acl.enable</name>
        <value>false</value>
        <description>Enable ACLs? Defaults to false.</description>
    </property>
    <property>
        <name>yarn.admin.acl</name>
        <value>*</value>
        <description>ACL that sets the admins on the cluster. ACLs have the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access.
        </description>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
        <description>Configuration to enable or disable log aggregation</description>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
        <description>HDFS directory to which aggregated application logs are moved</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>bigdata0.example.com</value>
        <description></description>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
        <description></description>
    </property>

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data/nm-local</value>
        <description>Comma-separated list of paths on the local filesystem where
            intermediate data is written.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/data/nm-log</value>
        <description>Comma-separated list of paths on the local filesystem where logs are
            written.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>10800</value>
        <description>Default time (in seconds) to retain log files on the NodeManager Only
            applicable if log-aggregation is disabled.
        </description>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description>Shuffle service that needs to be set for Map Reduce applications.
        </description>
    </property>
  
      <!-- To enable SSL -->
    <property>
        <name>yarn.http.policy</name>
        <value>HTTPS_ONLY</value>
    </property>

    <property>
        <name>yarn.nodemanager.linux-container-executor.group</name>
        <value>hadoop</value>
    </property>

    <!-- Enabling the following may require building container-executor on this machine -->
    <!-- You can check the environment with: -->
    <!-- hadoop checknative -a -->
    <!-- ldd $HADOOP_HOME/bin/container-executor -->
    <!-- 
    <property>
        <name>yarn.nodemanager.container-executor.class</name>
        <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>

    <property>
        <name>yarn.nodemanager.linux-container-executor.path</name>
        <value>/bigdata/hadoop-3.3.2/bin/container-executor</value>
    </property>
    -->
  
    <!-- Kerberos-related settings -->
    <!-- ResourceManager security configs -->
    <property>
        <name>yarn.resourcemanager.principal</name>
        <value>rm/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
        <name>yarn.resourcemanager.keytab</name>
        <value>/etc/security/keytabs/rm.service.keytab</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
        <value>true</value>
    </property>
    <!-- NodeManager security configs -->
    <property>
        <name>yarn.nodemanager.principal</name>
        <value>nm/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
        <name>yarn.nodemanager.keytab</name>
        <value>/etc/security/keytabs/nm.service.keytab</value>
    </property>
    <!-- TimeLine security configs -->
    <property>
        <name>yarn.timeline-service.principal</name>
        <value>tl/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
        <name>yarn.timeline-service.keytab</name>
        <value>/etc/security/keytabs/tl.service.keytab</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-authentication.type</name>
        <value>kerberos</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-authentication.kerberos.principal</name>
        <value>HTTP/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
        <name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
        <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

</configuration>
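If you do enable LinuxContainerExecutor later, it also reads $HADOOP_HOME/etc/hadoop/container-executor.cfg, which the permission script above already sets to root:hadoop ownership. A minimal sketch, assuming the hadoop group used throughout this article and system accounts below UID 1000:

# container-executor.cfg
yarn.nodemanager.linux-container-executor.group=hadoop
# service accounts that must never run containers
banned.users=hdfs,yarn,mapred,bin
# refuse to run containers for UIDs below this value
min.user.id=1000
allowed.system.users=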

mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
       <description></description>
    </property>
    <property>
     <name>mapreduce.jobhistory.address</name>
     <value>bigdata0.example.com:10020</value>
     <description></description>
    </property>
    <property>
     <name>mapreduce.jobhistory.webapp.address</name>
     <value>bigdata0.example.com:19888</value>
     <description></description>
    </property>
    <property>
     <name>mapreduce.jobhistory.intermediate-done-dir</name>
     <value>/mr-data/mr-history/tmp</value>
     <description></description>
    </property>
    <property>
     <name>mapreduce.jobhistory.done-dir</name> 
     <value>/mr-data/mr-history/done</value>
     <description></description>
    </property>
  
     <property>
       <name>mapreduce.jobhistory.http.policy</name>
       <value>HTTPS_ONLY</value>
     </property>
  
      <!-- Kerberos-related settings -->
     <property>
       <name>mapreduce.jobhistory.keytab</name>
       <value>/etc/security/keytabs/jhs.service.keytab</value>
     </property>
     <property>
       <name>mapreduce.jobhistory.principal</name>
       <value>jhs/_HOST@EXAMPLE.COM</value>
     </property>
     <property>
       <name>mapreduce.jobhistory.webapp.spnego-principal</name>
       <value>HTTP/_HOST@EXAMPLE.COM</value>
     </property>
     <property>
       <name>mapreduce.jobhistory.webapp.spnego-keytab-file</name>
       <value>/etc/security/keytabs/spnego.service.keytab</value>
     </property>

</configuration>
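The two jobhistory directories above live in HDFS, not on the local filesystem, so they must exist with ownership the JobHistory Server can write to. A sketch of the one-time setup, assuming the mapred local account from the auth_to_local rules and an nn principal for bigdata0 created earlier in this series (with Kerberos enabled, authenticate as an HDFS superuser first):

kinit -kt /etc/security/keytabs/nn.service.keytab nn/bigdata0.example.com@EXAMPLE.COM
hdfs dfs -mkdir -p /mr-data/mr-history/tmp /mr-data/mr-history/done
hdfs dfs -chown -R mapred:hadoop /mr-data/mr-history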

Create the HTTPS certificate

Generate a self-signed CA key and certificate (run on bigdata0, under /etc/security/cdh.https):

openssl req -new -x509 -keyout bd_ca_key -out bd_ca_cert -days 9999 -subj '/C=CN/ST=beijing/L=beijing/O=test/OU=test/CN=test'

Copy the CA files to the other two nodes:

scp -r /etc/security/cdh.https bigdata1:/etc/security/
scp -r /etc/security/cdh.https bigdata2:/etc/security/

[root@bigdata0 cdh.https]# openssl req -new -x509 -keyout bd_ca_key -out bd_ca_cert -days 9999 -subj '/C=CN/ST=beijing/L=beijing/O=test/OU=test/CN=test'
Generating a 2048 bit RSA private key
..............+++
..........................+++
writing new private key to 'bd_ca_key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
[root@bigdata0 cdh.https]#
[root@bigdata0 cdh.https]# ll
total 8
-rw-r--r--. 1 root root 1298 Oct  5 17:14 bd_ca_cert
-rw-r--r--. 1 root root 1834 Oct  5 17:14 bd_ca_key
[root@bigdata0 cdh.https]# scp -r /etc/security/cdh.https bigdata1:/etc/security/
bd_ca_key                                                                             100% 1834   913.9KB/s   00:00
bd_ca_cert                                                                            100% 1298     1.3MB/s   00:00
[root@bigdata0 cdh.https]# scp -r /etc/security/cdh.https bigdata2:/etc/security/
bd_ca_key                                                                             100% 1834     1.7MB/s   00:00
bd_ca_cert                                                                            100% 1298     1.3MB/s   00:00
[root@bigdata0 cdh.https]#



Run the following on each of the three nodes in turn

cd /etc/security/cdh.https

# Enter 123456 wherever a password is requested (for convenience; change it if you need a stronger password)

# 1  Enter and confirm the password: 123456. On success this produces the keystore file
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=test, OU=test, O=test, L=beijing, ST=beijing, C=CN"

# 2  Enter and confirm the password: 123456, then answer yes when asked to trust the certificate. On success this produces the truststore file
keytool -keystore truststore -alias CARoot -import -file bd_ca_cert

# 3  Enter and confirm the password: 123456. On success this produces the cert file
keytool -certreq -alias localhost -keystore keystore -file cert

# 4  On success this produces the cert_signed file
openssl x509 -req -CA bd_ca_cert -CAkey bd_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial -passin pass:123456

# 5  Enter and confirm the password: 123456, then answer yes when asked to trust the certificate. On success this updates the keystore file
keytool -keystore keystore -alias CARoot -import -file bd_ca_cert

# 6  Enter and confirm the password: 123456
keytool -keystore keystore -alias localhost -import -file cert_signed


The final result:
-rw-r--r-- 1 root root 1294 Sep 26 11:31 bd_ca_cert
-rw-r--r-- 1 root root   17 Sep 26 11:36 bd_ca_cert.srl
-rw-r--r-- 1 root root 1834 Sep 26 11:31 bd_ca_key
-rw-r--r-- 1 root root 1081 Sep 26 11:36 cert
-rw-r--r-- 1 root root 1176 Sep 26 11:36 cert_signed
-rw-r--r-- 1 root root 4055 Sep 26 11:37 keystore
-rw-r--r-- 1 root root  978 Sep 26 11:35 truststore
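To double-check the result on each node, list the keystore entries; after steps 5 and 6 it should hold both the imported CA root and the signed host certificate (keytool lowercases aliases in its listing):

keytool -list -keystore keystore -storepass 123456
# expect two entries: caroot (trustedCertEntry) and localhost (PrivateKeyEntry)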


Configure ssl-server.xml and ssl-client.xml

ssl-server.xml

<configuration>
    <property>
        <name>ssl.server.truststore.location</name>
        <value>/etc/security/cdh.https/truststore</value>
        <description>Truststore to be used by NN and DN. Must be specified.
        </description>
    </property>

    <property>
        <name>ssl.server.truststore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "".
        </description>
    </property>

    <property>
        <name>ssl.server.truststore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".
        </description>
    </property>

    <property>
        <name>ssl.server.truststore.reload.interval</name>
        <value>10000</value>
        <description>Truststore reload check interval, in milliseconds.
        Default value is 10000 (10 seconds).
        </description>
    </property>

    <property>
        <name>ssl.server.keystore.location</name>
        <value>/etc/security/cdh.https/keystore</value>
        <description>Keystore to be used by NN and DN. Must be specified.
        </description>
    </property>

    <property>
        <name>ssl.server.keystore.password</name>
        <value>123456</value>
        <description>Must be specified.
        </description>
    </property>

    <property>
        <name>ssl.server.keystore.keypassword</name>
        <value>123456</value>
        <description>Must be specified.
        </description>
    </property>

    <property>
        <name>ssl.server.keystore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".
        </description>
    </property>

    <property>
        <name>ssl.server.exclude.cipher.list</name>
        <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
        SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
        SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
        SSL_RSA_WITH_RC4_128_MD5</value>
        <description>Optional. The weak security cipher suites that you want excluded
        from SSL communication.</description>
    </property>

</configuration>


ssl-client.xml

<configuration>
    <property>
        <name>ssl.client.truststore.location</name>
        <value>/etc/security/cdh.https/truststore</value>
        <description>Truststore to be used by clients like distcp. Must be
        specified.
        </description>
    </property>

    <property>
        <name>ssl.client.truststore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "".
        </description>
    </property>

    <property>
        <name>ssl.client.truststore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".
        </description>
    </property>

    <property>
        <name>ssl.client.truststore.reload.interval</name>
        <value>10000</value>
        <description>Truststore reload check interval, in milliseconds.
        Default value is 10000 (10 seconds).
        </description>
    </property>

    <property>
        <name>ssl.client.keystore.location</name>
        <value>/etc/security/cdh.https/keystore</value>
        <description>Keystore to be used by clients like distcp. Must be
        specified.
        </description>
    </property>

    <property>
        <name>ssl.client.keystore.password</name>
        <value>123456</value>
        <description>Optional. Default value is "".
        </description>
    </property>

    <property>
        <name>ssl.client.keystore.keypassword</name>
        <value>123456</value>
        <description>Optional. Default value is "".
        </description>
    </property>

    <property>
        <name>ssl.client.keystore.type</name>
        <value>jks</value>
        <description>Optional. The keystore file format, default value is "jks".
        </description>
    </property>

</configuration>
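After the daemons restart with these files in place, you can probe an HTTPS endpoint directly. A quick check with curl against the NameNode UI on the dfs.https.port configured earlier (-k skips CA verification; alternatively validate against the self-signed CA, which openssl wrote in PEM format):

curl -k https://bigdata0.example.com:50470/
# or validate against the CA certificate
curl --cacert /etc/security/cdh.https/bd_ca_cert https://bigdata0.example.com:50470/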


Distribute the configuration

scp $HADOOP_HOME/etc/hadoop/* bigdata1:/bigdata/hadoop-3.3.2/etc/hadoop/
scp $HADOOP_HOME/etc/hadoop/* bigdata2:/bigdata/hadoop-3.3.2/etc/hadoop/


4. Start the Services

Note: after initializing the NameNode, you can start everything with sbin/start-all.sh.
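A minimal start sequence, assuming the directory layout above. Formatting is one-time and destroys any existing NameNode metadata, so run it only on a fresh cluster:

# one-time initialization of the NameNode metadata directory, as the hdfs user
hdfs namenode -format
# then start HDFS and YARN
$HADOOP_HOME/sbin/start-all.sh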

5. Common Errors

5.1 Kerberos-related errors
Cannot reach the realm

Cannot contact any KDC for realm

This usually means the KDC is unreachable over the network.

1. The firewall may not have been disabled:

[root@bigdata0 bigdata]# kinit krbtest/admin@EXAMPLE.COM
kinit: Cannot contact any KDC for realm 'EXAMPLE.COM' while getting initial credentials
[root@bigdata0 bigdata]#
[root@bigdata0 bigdata]#
[root@bigdata0 bigdata]# systemctl stop firewalld.service
[root@bigdata0 bigdata]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@bigdata0 bigdata]# kinit krbtest/admin@EXAMPLE.COM
Password for krbtest/admin@EXAMPLE.COM:
[root@bigdata0 bigdata]#
[root@bigdata0 bigdata]#
[root@bigdata0 bigdata]#
[root@bigdata0 bigdata]#
[root@bigdata0 bigdata]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: krbtest/admin@EXAMPLE.COM

Valid starting       Expires              Service principal
2023-10-05T16:19:58  2023-10-06T16:19:58  krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@bigdata0 bigdata]#


2. The KDC may not be mapped in the hosts file

Edit /etc/hosts and add a mapping from the KDC's IP to its hostname:

# {kdc ip}     {kdc}
192.168.50.10  bigdata3.example.com
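If kinit still fails after fixing the hosts file, confirm that the KDC port is actually reachable. A quick probe, assuming bigdata3.example.com is the KDC host as mapped above (the KDC listens on port 88 by default):

nc -zv bigdata3.example.com 88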


https://serverfault.com/questions/612869/kinit-cannot-contact-any-kdc-for-realm-ubuntu-while-getting-initial-credentia

kinit resolves the wrong library dependency

kinit: relocation error: kinit: symbol krb5_get_init_creds_opt_set_pac_request, version krb5_3_MIT not defined in file libkrb5.so.3 with link time reference

Running ldd $(which kinit) showed that kinit was resolving libkrb5.so.3 from a directory other than /lib64. The fix is to export an LD_LIBRARY_PATH that puts /lib64 first, so the loader picks up the matching MIT library.
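A sketch of the diagnosis and fix, assuming the correct MIT krb5 build lives under /lib64:

# see which libkrb5.so.3 kinit actually loads
ldd $(which kinit) | grep libkrb5
# put the system copy first on the loader path
export LD_LIBRARY_PATH=/lib64:$LD_LIBRARY_PATH
kinit krbtest/admin@EXAMPLE.COM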

