Kerberos is a computer-network authentication protocol. It lets an entity prove its identity to another entity in a secure way over an insecure network. The protocol is built on symmetric cryptography and requires a trusted third party, the Key Distribution Center (KDC). In other words, Kerberos is a third-party authentication mechanism: users and services rely on the Kerberos server to authenticate each other. The Kerberos server itself is called the Key Distribution Center, or KDC.
The KDC consists mainly of the Authentication Server (AS) and the Ticket Granting Server (TGS); the Service Server (SS) is the component that provides the actual service. Related concepts:
The KDC holds a key database. Every network entity, client or server, shares a secret key known only to itself and the KDC; knowledge of this key is what proves the entity's identity. For communication between two entities, the KDC generates a session key that is used to encrypt their interaction.
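From the client's point of view, this exchange is what kinit and klist expose on the command line; a minimal sketch (the principal name is a placeholder):
kinit user@HADOOP.COM   # AS exchange: obtain a ticket-granting ticket (TGT) with the user's own key
klist                   # the credential cache now holds krbtgt/HADOOP.COM@HADOOP.COM
# a kerberized client (e.g. an HDFS command) then uses the TGT to request a service
# ticket from the TGS and presents that ticket to the service (SS)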
An everyday analogy is buying and using a train ticket.
The above is adapted from the articles 《Kerberos协议》 and 《HBase相关参数与kerberos入坑指南》.
The machine (node) used below is named node1.
yum -y install krb5-server krb5-libs krb5-workstation
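To confirm that the packages landed, a quick check on an RPM-based system (a hedged example):
rpm -qa | grep -i krb5
# expect krb5-server, krb5-libs and krb5-workstation in the output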
After the command finishes, the Kerberos configuration files, such as krb5.conf and kdc.conf, are generated.
Their main paths are /etc/krb5.conf, /var/kerberos/krb5kdc/kdc.conf and /var/kerberos/krb5kdc/kadm5.acl.
cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
HADOOP.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
max_life = 1d
max_renewable_life = 7d
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
Note: the aes256-cts:normal encryption type in the configuration above needs support from additional jar packages (on the Java side); it can simply be removed.
Configuration notes:
[kdcdefaults]: settings that apply to everything listed in this file.
kdc_ports = 88: the KDC's default (UDP) port.
kdc_tcp_ports = 88: the KDC's default TCP port.
[realms]: per-realm configuration.
HADOOP.COM = { ... }: the realm being defined. The name is arbitrary (upper case is recommended) but must match /etc/krb5.conf, and it is case sensitive. Kerberos can serve multiple realms, at the cost of extra complexity.
#master_key_type = aes256-cts: left commented out here.
acl_file = /var/kerberos/krb5kdc/kadm5.acl: the file describing admin users' privileges; create it yourself if it does not exist. It defines the ACL for the principals (UPNs) that have administrative access to the Kerberos database.
dict_file = /usr/share/dict/words: a file of potentially guessable or crackable passwords.
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab: the keytab the KDC uses for verification.
max_life: the maximum lifetime of a ticket (1 day here).
max_renewable_life: how long a ticket may keep being renewed (7 days here).
supported_enctypes: the encryption types this KDC supports.
cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM *
Configuration notes:
Each entry has only two columns: the first is the principal (user) pattern, the second the permissions granted.
*/admin@HADOOP.COM   # matches every principal whose name ends in "/admin@HADOOP.COM".
*                    # means the matched principals may perform any operation, i.e. they hold all privileges.
The available permission flags are listed below:
a: allow adding principals or policies
A: disallow adding principals or policies
c: allow changing principals' passwords
C: disallow changing principals' passwords
d: allow deleting principals or policies
D: disallow deleting principals or policies
i: allow inquiries against the database
I: disallow inquiries against the database
l: allow listing principals or policies
L: disallow listing principals or policies
m: allow modifying principals or policies
M: disallow modifying principals or policies
p: allow propagating the principal database
P: disallow propagating the principal database
s: allow explicitly setting principals' keys
S: disallow explicitly setting principals' keys
x: short for admcilsp, i.e. all privileges
*: same as x
Some examples:
root/admin@HADOOP.COM *
===Note: root/admin can do anything in kadmin===
xianglei/admin@HADOOP.COM aml
===Note: xianglei/admin can add and modify principals and list them in kadmin,===
===but it cannot delete or propagate them, inspect the database contents, or change passwords.===
list/admin@HADOOP.COM l
===Note: list/admin can only list; it cannot do anything else===
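To double-check which privileges a principal actually ends up with under kadm5.acl, the getprivs request (listed in the kadmin help shown later) can be run as that principal; a hedged example:
kadmin -p list/admin -q "getprivs"
# should report only the list privilege for list/admin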
This file contains the location of the KDC, the admin server, the realms, and so on. It must be kept identical on every machine that uses Kerberos.
cat /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
default_realm = HADOOP.COM
udp_preference_limit = 1   # forcing TCP instead of UDP avoids a known Hadoop issue
# default_ccache_name = KEYRING:persistent:%{uid}

[realms]
HADOOP.COM = {
kdc = node1            # hostname of the KDC node
admin_server = node1   # hostname of the admin-server node
default_domain = HADOOP.COM
}

[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM
Configuration notes:
[logging]: how the Kerberos daemons log; in other words, where the server-side logs are written.
default: path of the krb5libs.log log file.
kdc: path of the krb5kdc.log log file.
admin_server: path of the kadmind.log log file.
[libdefaults]: defaults used by the Kerberos library. When authentication is performed without naming a realm, the realm given by default_realm is used. The key settings:
dns_lookup_realm: look up the realm via DNS (forward resolution); disabled by default. (The original author did not verify this and guesses it is related to [domain_realm].)
ticket_lifetime: how long a credential is valid; set to 24 hours here.
rdns: the reverse of dns_lookup_realm, i.e. reverse DNS resolution; leaving it disabled is fine. (Also not verified by the original author.)
pkinit_anchors: where the PKINIT trust anchors are configured; not examined further here.
default_realm = HADOOP.COM: the default realm for Kerberos applications. If there are several realms, add further entries to the [realms] section. The name is arbitrary (upper case recommended) but must match the realm being configured.
default_ccache_name: the default credential-cache name; using this parameter is not recommended here.
renew_lifetime: the longest period over which a credential may keep being renewed, 7 days here. Once a credential finally expires, later access to Kerberos-protected services fails.
forwardable: if true, tickets can be forwarded, meaning that if a user holding a TGT logs in to a remote system, the KDC can issue a new TGT there without the user authenticating again.
renewable: whether tickets may be renewed.
[realms]: realm-specific information, such as where each realm's Kerberos servers are. There may be several entries, one per realm. A port can be given for the KDC and the admin server; if omitted, the KDC uses port 88 and the admin server port 749.
kdc: location of the KDC, in host:port form.
admin_server: location of the admin server, in host:port form.
default_domain: the default domain name.
[domain_realm]: the mapping from DNS domain names to Kerberos realms. For a server's FQDN, the matching domain_realm entry decides which realm the host belongs to.
[kdc]: KDC settings, i.e. where kdc.conf is found.
profile: path of the KDC configuration file; if no file exists at the default location, it must be created.
kdb5_util create -r HADOOP.COM -s
[root@node1 etc]# kdb5_util create -r HADOOP.COM -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOP.COM',
master key name 'K/M@HADOOP.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:            # enter the database master password here
Re-enter KDC database master key to verify:
Note:
After the Kerberos database has been created, the following five files are generated by default (all files in the listing below except kdc.conf and kadm5.acl):
[root@node1 etc]# ls /var/kerberos/krb5kdc/
kadm5.acl kdc.conf principal principal.kadm5 principal.kadm5.lock principal.ok
[root@node1 etc]# ll -a /var/kerberos/krb5kdc/
total 28
drwxr-xr-x. 2 root root 149 Dec 14 14:11 .
drwxr-xr-x. 4 root root 33 Dec 14 13:51 ..
-rw-------. 1 root root 75 Dec 14 14:11 .k5.HADOOP.COM
-rw-------. 1 root root 21 Dec 14 13:57 kadm5.acl
-rw-------. 1 root root 492 Dec 14 14:06 kdc.conf
-rw-------. 1 root root 8192 Dec 14 14:11 principal
-rw-------. 1 root root 8192 Dec 14 14:11 principal.kadm5
-rw-------. 1 root root 0 Dec 14 14:11 principal.kadm5.lock
-rw-------. 1 root root 0 Dec 14 14:11 principal.ok
[root@node1 etc]#
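Side note: if the database ever has to be rebuilt from scratch (for example because the master password was lost), one approach, sketched here and destructive to all principals, is:
kdb5_util destroy -r HADOOP.COM     # asks for confirmation, then deletes the principal database
kdb5_util create -r HADOOP.COM -s   # re-create it and set a new master key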
[root@node1 etc]# systemctl status krb5kdc
● krb5kdc.service - Kerberos 5 KDC
   Loaded: loaded (/usr/lib/systemd/system/krb5kdc.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@node1 etc]# systemctl start krb5kdc
[root@node1 etc]# systemctl status krb5kdc
● krb5kdc.service - Kerberos 5 KDC
   Loaded: loaded (/usr/lib/systemd/system/krb5kdc.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-14 14:20:53 CST; 2s ago
  Process: 2564 ExecStart=/usr/sbin/krb5kdc -P /var/run/krb5kdc.pid $KRB5KDC_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 2566 (krb5kdc)
    Tasks: 1
   CGroup: /system.slice/krb5kdc.service
           └─2566 /usr/sbin/krb5kdc -P /var/run/krb5kdc.pid
Dec 14 14:20:53 node1 systemd[1]: Starting Kerberos 5 KDC...
Dec 14 14:20:53 node1 systemd[1]: Can't open PID file /var/run/krb5kdc.pid (yet?) after start: No such file or directory
Dec 14 14:20:53 node1 systemd[1]: Started Kerberos 5 KDC.
[root@node1 etc]#
[root@node1 etc]# systemctl status kadmin
● kadmin.service - Kerberos 5 Password-changing and Administration
   Loaded: loaded (/usr/lib/systemd/system/kadmin.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@node1 etc]# systemctl start kadmin
[root@node1 etc]# systemctl status kadmin
● kadmin.service - Kerberos 5 Password-changing and Administration
   Loaded: loaded (/usr/lib/systemd/system/kadmin.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-14 14:22:38 CST; 1s ago
  Process: 2594 ExecStart=/usr/sbin/_kadmind -P /var/run/kadmind.pid $KADMIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 2596 (kadmind)
    Tasks: 1
   CGroup: /system.slice/kadmin.service
           └─2596 /usr/sbin/kadmind -P /var/run/kadmind.pid
Dec 14 14:22:38 node1 systemd[1]: Starting Kerberos 5 Password-changing and Administration...
Dec 14 14:22:38 node1 systemd[1]: Can't open PID file /var/run/kadmind.pid (yet?) after start: No such file or directory
Dec 14 14:22:38 node1 systemd[1]: Started Kerberos 5 Password-changing and Administration.
[root@node1 etc]#
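Both services show "disabled" in the output above, so they will not come back after a reboot. Assuming systemd as shown, they can also be enabled at boot:
systemctl enable krb5kdc kadmin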
[root@node1 etc]# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM *
[root@node1 etc]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local: addprinc root/admin
WARNING: no policy specified for root/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "root/admin@HADOOP.COM":
Re-enter password for principal "root/admin@HADOOP.COM":
Principal "root/admin@HADOOP.COM" created.
kadmin.local:
kadmin.local: listprincs
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node1@HADOOP.COM
kiprop/node1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
kadmin.local: q
[root@node1 etc]#
Here we add an administrator principal for the KDC; the admin rules were already defined in /var/kerberos/krb5kdc/kadm5.acl.
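For scripted setups, the same principal can be added non-interactively with kadmin.local's -q option (it still prompts for the new principal's password); a hedged one-liner:
kadmin.local -q "addprinc root/admin"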
On another node (node2), install the client packages:
yum install -y krb5-libs krb5-workstation
Copy the server-side configuration file to the client by running the following on the server node:
scp /etc/krb5.conf node2:/etc/
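If there are several client nodes, the copy can be done in a loop; a hedged sketch (node3 here is purely illustrative):
for h in node2 node3; do scp /etc/krb5.conf ${h}:/etc/; done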
Once the client's configuration matches the server's, try logging in to verify that authentication works:
[root@node2~]# kinit root/admin
Password for root/admin@HADOOP.COM:
[root@node2~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@HADOOP.COM
Valid starting Expires Service principal
2020-12-14T14:35:30 2020-12-15T14:35:32 krbtgt/HADOOP.COM@HADOOP.COM
renew until 2020-12-21T14:35:30
[root@node2~]#
If the wrong password is entered, the login fails:
[root@node2~]# kinit root/admin
Password for root/admin@HADOOP.COM:
kinit: Password incorrect while getting initial credentials
[root@node2~]#
#Check the status of the KDC; it is inactive (dead)
[root@node1 etc]# systemctl status krb5kdc
● krb5kdc.service - Kerberos 5 KDC
   Loaded: loaded (/usr/lib/systemd/system/krb5kdc.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
#Start the KDC
[root@node1 etc]# systemctl start krb5kdc
#Check the status again; it is now running
[root@node1 etc]# systemctl status krb5kdc
● krb5kdc.service - Kerberos 5 KDC
   Loaded: loaded (/usr/lib/systemd/system/krb5kdc.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-14 14:20:53 CST; 2s ago
  Process: 2564 ExecStart=/usr/sbin/krb5kdc -P /var/run/krb5kdc.pid $KRB5KDC_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 2566 (krb5kdc)
    Tasks: 1
   CGroup: /system.slice/krb5kdc.service
           └─2566 /usr/sbin/krb5kdc -P /var/run/krb5kdc.pid
Dec 14 14:20:53 node1 systemd[1]: Starting Kerberos 5 KDC...
Dec 14 14:20:53 node1 systemd[1]: Can't open PID file /var/run/krb5kdc.pid (yet?) after start: No such file or directory
Dec 14 14:20:53 node1 systemd[1]: Started Kerberos 5 KDC.
[root@node1 etc]#
#Check the status of kadmin; it is inactive (dead)
[root@node1 etc]# systemctl status kadmin
● kadmin.service - Kerberos 5 Password-changing and Administration
   Loaded: loaded (/usr/lib/systemd/system/kadmin.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
#Start kadmin
[root@node1 etc]# systemctl start kadmin
#Check the status again; it is now running
[root@node1 etc]# systemctl status kadmin
● kadmin.service - Kerberos 5 Password-changing and Administration
   Loaded: loaded (/usr/lib/systemd/system/kadmin.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-14 14:22:38 CST; 1s ago
  Process: 2594 ExecStart=/usr/sbin/_kadmind -P /var/run/kadmind.pid $KADMIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 2596 (kadmind)
    Tasks: 1
   CGroup: /system.slice/kadmin.service
           └─2596 /usr/sbin/kadmind -P /var/run/kadmind.pid
Dec 14 14:22:38 node1 systemd[1]: Starting Kerberos 5 Password-changing and Administration...
Dec 14 14:22:38 node1 systemd[1]: Can't open PID file /var/run/kadmind.pid (yet?) after start: No such file or directory
Dec 14 14:22:38 node1 systemd[1]: Started Kerberos 5 Password-changing and Administration.
[root@node1 etc]#
You can use either kadmin.local or kadmin; which one depends on the account and access rights: kadmin.local (on the KDC machine) or kadmin (from any machine).
kadmin requires the principal to be given explicitly, e.g. kadmin -p root/admin:
#Authenticate as the hadoop/node1 principal
[root@node1 bin]# kadmin -p hadoop/node1
Authenticating as principal hadoop/node1 with password.
Password for hadoop/node1@HADOOP.COM:
#List existing principals; this principal has no permission to do so
kadmin: listprincs
get_principals: Operation requires ``list'' privilege while retrieving list.
kadmin:
Log in as root/admin instead:
[root@node1 bin]# kadmin -p root/admin
Authenticating as principal root/admin with password.
Password for root/admin@HADOOP.COM:
#List existing principals; this principal does have permission
kadmin: listprincs
K/M@HADOOP.COM
hadoop/node1@HADOOP.COM
http/node1@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node1@HADOOP.COM
kiprop/node1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
root/node1@HADOOP.COM
zkcli/node1@HADOOP.COM
zookeeper/node1@HADOOP.COM
kadmin:
Here we log in with kadmin.local; its help output lists all available commands:
#Enter local administrator mode
[root@node1 etc]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
#Show the help
kadmin.local: ?
Available kadmin.local requests:

add_principal, addprinc, ank                              Add principal
delete_principal, delprinc                                Delete principal
modify_principal, modprinc                                Modify principal
rename_principal, renprinc                                Rename principal
change_password, cpw                                      Change password
get_principal, getprinc                                   Get principal
list_principals, listprincs, get_principals, getprincs    List principals
add_policy, addpol                                        Add policy
modify_policy, modpol                                     Modify policy
delete_policy, delpol                                     Delete policy
get_policy, getpol                                        Get policy
list_policies, listpols, get_policies, getpols            List policies
get_privs, getprivs                                       Get privileges
ktadd, xst                                                Add entry(s) to a keytab
ktremove, ktrem                                           Remove entry(s) from a keytab
lock                                                      Lock database exclusively (use with extreme caution!)
unlock                                                    Release exclusive database lock
purgekeys                                                 Purge previously retained old keys from a principal
get_strings, getstrs                                      Show string attributes on a principal
set_string, setstr                                        Set a string attribute on a principal
del_string, delstr                                        Delete a string attribute on a principal
list_requests, lr, ?                                      List available requests.
quit, exit, q                                             Exit program.
kadmin.local:
kadmin.local: listprincs
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node1@HADOOP.COM
kiprop/node1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
kadmin.local:
Principals can be created in two ways: with a randomly generated key or with a password. For example:
For the password form you can also simply run addprinc hdfs/node1 and type the password at the prompts; the random-key form is sketched right after the example below.
kadmin.local: addprinc hdfs/node1
WARNING: no policy specified for hdfs/node1@HADOOP.COM; defaulting to no policy
Enter password for principal "hdfs/node1@HADOOP.COM":
Re-enter password for principal "hdfs/node1@HADOOP.COM":
Principal "hdfs/node1@HADOOP.COM" created.
kadmin.local: listprincs
K/M@HADOOP.COM
hdfs/node1@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node1@HADOOP.COM
kiprop/node1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
kadmin.local:
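The random-key form uses the -randkey option, which creates the principal without prompting for a password; its key is then normally exported to a keytab, as shown in the next part. A hedged example:
kadmin.local: addprinc -randkey hdfs/node1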
Note:
The hdfs/node1 principal above was created with a chosen key (password). If its key is later exported with ktadd -k /root/node1.keytab hdfs/node1, the password chosen at creation time stops working. Reason: by default the keys are re-randomized each time a keytab is generated, so a later kinit with the original password may fail with: kinit: Password incorrect while getting initial credentials. Solution: add "-norandkey" when generating the keytab.
Use the xst command or the ktadd command:
ktadd -norandkey -k /root/node1.keytab hdfs/node1
or: xst -k /root/node1.keytab hdfs/node1
or: xst -norandkey -k /root/node1.keytab hdfs/node1 jack/node1
Here /root/node1.keytab is whatever path and file name you choose (ending in .keytab), and hdfs/node1 is the principal created earlier.
#List existing principals/users
kadmin.local: listprincs
K/M@HADOOP.COM
hdfs/node1@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node1@HADOOP.COM
kiprop/node1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
#Export the hdfs/node1 principal into the file node1.keytab
#under /usr/data/kerberos/keytab
kadmin.local: ktadd -norandkey -k /usr/data/kerberos/keytab/node1.keytab hdfs/node1
Entry for principal hdfs/node1 with kvno 1, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
Entry for principal hdfs/node1 with kvno 1, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
Entry for principal hdfs/node1 with kvno 1, encryption type des3-cbc-sha1 added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
Entry for principal hdfs/node1 with kvno 1, encryption type arcfour-hmac added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
Entry for principal hdfs/node1 with kvno 1, encryption type camellia256-cts-cmac added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
Entry for principal hdfs/node1 with kvno 1, encryption type camellia128-cts-cmac added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
Entry for principal hdfs/node1 with kvno 1, encryption type des-hmac-sha1 added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
Entry for principal hdfs/node1 with kvno 1, encryption type des-cbc-md5 added to keytab WRFILE:/usr/data/kerberos/keytab/node1.keytab.
kadmin.local:
[root@node1 keytab]# cd /usr/data/kerberos/keytab/
[root@node1 keytab]# ll
total 4
-rw-------. 1 root root 538 Dec 14 15:20 node1.keytab
[root@node1 keytab]#
#Inspect the keytab file
[root@node1 keytab]# klist -kt node1.keytab
Keytab name: FILE:node1.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
   1 2020-12-14T15:20:27 hdfs/node1@HADOOP.COM
[root@node1 keytab]#
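The eight entries are one per encryption type generated for the principal (cf. supported_enctypes in kdc.conf). To also print the encryption type of each keytab entry, klist accepts an extra -e flag; a hedged example:
klist -kte node1.keytab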
If nobody is logged in, klist shows:
[root@node1 keytab]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
[root@node1 keytab]#
If a login is active, it shows:
[root@node1~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@HADOOP.COM
Valid starting Expires Service principal
2020-12-14T14:35:30 2020-12-15T14:35:32 krbtgt/HADOOP.COM@HADOOP.COM
renew until 2020-12-21T14:35:30
[root@node1~]#
The command is kdestroy.
#Check the current login status; a login is active
[root@node1~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@HADOOP.COM
Valid starting Expires Service principal
2020-12-14T14:35:30 2020-12-15T14:35:32 krbtgt/HADOOP.COM@HADOOP.COM
renew until 2020-12-21T14:35:30
#Delete the current credential cache, i.e. log out
[root@node1~]# kdestroy
#Check the login status again; nobody is logged in now
[root@node1~]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
[root@node1~]#
#Check the current login status; nobody is logged in
[root@node2~]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
#Log in with a password
[root@node2~]# kinit root/admin
Password for root/admin@HADOOP.COM:
#Check the login status again; the login succeeded
[root@node2~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@HADOOP.COM
Valid starting Expires Service principal
2020-12-14T15:30:27 2020-12-15T15:30:26 krbtgt/HADOOP.COM@HADOOP.COM
renew until 2020-12-21T15:30:26
[root@node2~]#
#Check the current login status; nobody is logged in
[root@node2 keytab]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
#Log in with a keytab file
[root@node2 keytab]# kinit -kt /usr/data/kerberos/keytab/node1.keytab hdfs/node1
#Check the login status again; the login succeeded
[root@node2 keytab]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs/node1@HADOOP.COM
Valid starting Expires Service principal
2020-12-14T15:33:57 2020-12-15T15:33:57 krbtgt/HADOOP.COM@HADOOP.COM
renew until 2020-12-21T15:33:57
[root@node2 keytab]#
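While the ticket is still inside its "renew until" window, it can be renewed without typing the password or using the keytab again; a hedged example:
kinit -R        # renew the current ticket
klist           # the "Valid starting"/"Expires" times move forward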
#Change a password when the old password is known
[root@node1 keytab]# kpasswd hdfs/node1
#Enter the old password
Password for hdfs/node1@HADOOP.COM:
#Enter the new password
Enter new password:
#Re-enter the new password to confirm
Enter it again:
Password changed.
[root@node1 keytab]#
#Enter local administrator mode
[root@node1 keytab]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
#Change the password of hdfs/node1 (the old password is not needed)
kadmin.local: change_password hdfs/node1
Enter password for principal "hdfs/node1@HADOOP.COM":
Re-enter password for principal "hdfs/node1@HADOOP.COM":
Password for "hdfs/node1@HADOOP.COM" changed.
kadmin.local:
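The same change can be made without entering the interactive shell by passing the cpw alias (see the help output earlier) to kadmin.local -q; a hedged one-liner that prompts twice for the new password:
kadmin.local -q "cpw hdfs/node1"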
#Enter local administrator mode
[root@node1 keytab]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
#List existing principals
kadmin.local: listprincs
K/M@HADOOP.COM
hdfs/node1@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node1@HADOOP.COM
kiprop/node1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
#Delete a principal
kadmin.local: delprinc hdfs/node1
Are you sure you want to delete the principal "hdfs/node1@HADOOP.COM"? (yes/no): yes
Principal "hdfs/node1@HADOOP.COM" deleted.
Make sure that you have removed this principal from all ACLs before reusing.
#List principals again
kadmin.local: listprincs
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node1@HADOOP.COM
kiprop/node1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
kadmin.local:
#Enter local administrator mode
[root@node1 keytab]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
#Leave administrator mode (quit, exit, and q all work)
kadmin.local: q
[root@node1 keytab]#
#Enter local administrator mode
[root@node1 keytab]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
#Show the details of a principal
kadmin.local: getprinc hadoop/node1
Principal: hadoop/node1@HADOOP.COM
Expiration date: [never]
Last password change: Wed Dec 23 16:00:53 CST 2020
Password expiration date: [never]
Maximum ticket life: 4 days 00:00:00
Maximum renewable life: 7 days 00:00:00
Last modified: Wed Dec 23 16:00:53 CST 2020 (root/admin@HADOOP.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 8
Key: vno 3, aes256-cts-hmac-sha1-96
Key: vno 3, aes128-cts-hmac-sha1-96
Key: vno 3, des3-cbc-sha1
Key: vno 3, arcfour-hmac
Key: vno 3, camellia256-cts-cmac
Key: vno 3, camellia128-cts-cmac
Key: vno 3, des-hmac-sha1
Key: vno 3, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]
kadmin.local:
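Attributes shown by getprinc, such as the maximum ticket life, can be adjusted with modprinc (also listed in the kadmin help); a hedged example that shortens this principal's ticket life:
kadmin.local: modprinc -maxlife 1day hadoop/node1
kadmin.local: getprinc hadoop/node1     # verify the new "Maximum ticket life"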