We routinely build customer business scenarios on top of big data components (Hadoop, Flink, Kafka, etc.). Security vendors' scans frequently flag these components for insecure-access vulnerabilities, and the industry-recommended fix is Kerberos authentication.
Kerberos is a network authentication protocol. On an untrusted network, it provides a reliable, centralized authentication service so that machines can securely access one another.
Role   | IP              | Hostname   | Packages
Server | 192.168.199.102 | bigdata-03 | krb5-server krb5-workstation krb5-libs krb5-devel
Client | 192.168.199.104 | bigdata-05 | krb5-workstation krb5-devel
The server and client must be reachable from each other over the network, and each host's hosts file must map both hostnames.
Run rpm -qa | grep krb to check which Kerberos packages are already installed, then install the packages listed in the table above:
yum install -y krb5-server krb5-workstation krb5-libs krb5-devel    (server)
yum install -y krb5-workstation krb5-devel    (client)
On the server, edit the KDC configuration file /var/kerberos/krb5kdc/kdc.conf:
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88
[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
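Note that the supported_enctypes value above still carries legacy encryption types (single DES, 3DES, RC4) that recent MIT krb5 releases deprecate or no longer build. The short check below flags them; it is an illustrative helper written for this article, not part of any Kerberos tooling:

```python
# Legacy encryption types that modern MIT krb5 deprecates or removes.
WEAK = {"des-hmac-sha1", "des-cbc-md5", "des-cbc-crc", "arcfour-hmac", "des3-hmac-sha1"}

def weak_enctypes(supported_enctypes: str) -> list:
    """Return the legacy enctypes found in a supported_enctypes value.

    Each entry in the value has the form enctype:salttype, so split
    off the salt type before comparing against the WEAK set.
    """
    names = [entry.split(":")[0] for entry in supported_enctypes.split()]
    return [name for name in names if name in WEAK]

# The value used in the kdc.conf above.
LINE = ("aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal "
        "arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal "
        "des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal")

print(weak_enctypes(LINE))
```

If the check reports legacy types, consider trimming supported_enctypes down to the AES entries on a new deployment.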
Edit /etc/krb5.conf on both the server and the client:
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log
[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOP.COM    # default realm; must match the realm in kdc.conf
 #default_ccache_name = KEYRING:persistent:%{uid}
[realms]
 HADOOP.COM = {
  kdc = bigdata-03              # master node hostname
  admin_server = bigdata-03     # master node hostname
 }
[domain_realm]
 .hadoop.com = HADOOP.COM      # DNS domain; must match the realm in kdc.conf
 hadoop.com = HADOOP.COM       # DNS domain; must match the realm in kdc.conf
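A quick sanity check is to confirm that default_realm and the [realms] block agree. The sketch below does this with simple regexes that assume the flat layout shown above; it is not a full krb5.conf parser:

```python
import re

def default_realm(conf_text: str) -> str:
    """Extract the default_realm value from krb5.conf-style text."""
    m = re.search(r"^\s*default_realm\s*=\s*(\S+)", conf_text, re.MULTILINE)
    if m is None:
        raise ValueError("no default_realm found")
    return m.group(1)

def realm_defined(conf_text: str, realm: str) -> bool:
    """Check that the realm has its own block, e.g. 'HADOOP.COM = {'."""
    return re.search(rf"^\s*{re.escape(realm)}\s*=\s*{{", conf_text, re.MULTILINE) is not None

# Trimmed sample matching the configuration above.
SAMPLE = """\
[libdefaults]
 default_realm = HADOOP.COM
[realms]
 HADOOP.COM = {
  kdc = bigdata-03
  admin_server = bigdata-03
 }
"""

realm = default_realm(SAMPLE)
print(realm, realm_defined(SAMPLE, realm))
```

A mismatch between these two places is a common cause of "Cannot find KDC for realm" errors later on.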
Grant full privileges to the admin principals in /var/kerberos/krb5kdc/kadm5.acl (the acl_file configured in kdc.conf):
*/admin@HADOOP.COM *
Create the KDC database (you will be prompted to set and confirm the database master password):
[root@bigdata-03 ~]# kdb5_util create -s -r HADOOP.COM
Loading random data
Initializing database /var/kerberos/krb5kdc/principal for realm HADOOP.COM,
master key name K/M@HADOOP.COM
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
If the command fails with:
kdb5_util: Cannot open DB2 database /var/kerberos/krb5kdc/principal: File exists while creating database /var/kerberos/krb5kdc/principal
a database already exists. Remove the old database files, then run the create command again:
[root@bigdata-03 ~]# rm -f /var/kerberos/krb5kdc/principal*
ls -a /var/kerberos/krb5kdc/    (confirm the old database files are gone)
Principals are managed on the server with kadmin.local; inside its shell, listprincs lists the existing principals:
kadmin.local
listprincs
Create an administrator principal, either non-interactively:
kadmin.local -q "addprinc admin/admin@HADOOP.COM"
or from inside the kadmin.local shell:
addprinc admin/admin@HADOOP.COM
A principal name follows the pattern: account/instance@realm
Example: admin/admin@HADOOP.COM
where realm is the Kerberos realm name, e.g. HADOOP.COM.
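The account/instance@realm convention above can be captured in a small parser. This is a hypothetical helper for illustration only (note the instance part is optional, as in plain user principals such as alice@HADOOP.COM):

```python
import re

def parse_principal(principal: str) -> dict:
    """Split a Kerberos principal of the form account/instance@REALM
    (instance optional) into its components."""
    m = re.fullmatch(r"([^/@]+)(?:/([^/@]+))?@([^/@]+)", principal)
    if m is None:
        raise ValueError(f"not a valid principal: {principal!r}")
    account, instance, realm = m.groups()
    return {"account": account, "instance": instance, "realm": realm}

print(parse_principal("admin/admin@HADOOP.COM"))
```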
Common errors when connecting Hadoop clients to the kerberized cluster:
org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
-- the client has no valid Kerberos ticket; obtain one with kinit (or check the principal/keytab configuration) and retry.
libgssapi_krb5.so.2: cannot open shared object file: No such file or directory
-- the Kerberos client libraries are missing on that host; install the krb5-libs package.
The copyright of this article belongs to the author; please do not reproduce it without permission. If this article violates any rules, you may contact the administrator to have it removed.
When reproducing, please cite the original address: https://www.ucloud.cn/yun/129204.html