How to configure the Hive CLI to automatically obtain a Kerberos ticket and renew or request a new one when it expires - hadoop

Hi, I am new to Hive and Kerberos.
I have some Hive jobs that run longer than the lifetime of a ticket. How can I configure Hive so that, when I start the Hive shell, it automatically requests a ticket if none is cached? If the ticket expires in the middle of a job, it should automatically acquire a new one. I may also have simultaneous jobs running as the same user, so ideally one cached ticket could be shared by many jobs.
Any solutions or directions to look at will be highly appreciated.
Thanks in advance.
I am looking for a solution in which the Hive CLI or shell can automatically acquire or renew Kerberos credentials.

What you need to look into is the Java Authentication and Authorization Service (JAAS).
It is how you enable Java to use Kerberos without adding anything to your code. Specifically, you might want to look at how Beeline uses its Kerberos configuration as an example.
Create a setEnv.sh file and save it inside the "bin" folder. Paste the content below inside it:
export HADOOP_HOME=/home/user/beeline/hadoop-2.5.1
export HIVE_HOME=/home/user/beeline/apache-hive-1.2.1-bin
export JAVA_HOME=/home/user/beeline/jre
PATH=$PATH:$HIVE_HOME/bin:$JAVA_HOME/bin
export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/home/user/beeline/conf/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Djava.security.auth.login.config=/home/user/beeline/conf/jaas.conf"
jaas.conf file:
Create and save a jaas.conf file under the conf folder:
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=false
useTicketCache=true;
};
krb5.conf file:
Create and save a krb5.conf file under the conf folder. Modify this file as per your environment:
[logging]
default = FILE:~/krb5libs.log
kdc = FILE:~/krb5kdc.log
admin_server = FILE:~/kadmind.log
kdc_rotate = {"period"=>"1d", "versions"=>200}
admin_server_rotate = {"period"=>"1d", "versions"=>201}
[libdefaults]
default_realm = DOMAIN.COM
dns_lookup_realm = false
dns_lookup_kdc = false
forwardable = true
renew_lifetime = 30d
ticket_lifetime = 30d
renewable = yes
service = yes
kdc_timeout = 5000
default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts arcfour-hmac-md5 des-cbc-crc des-cbc-md5 des-hmac-sha1
default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts arcfour-hmac-md5 des-cbc-crc des-cbc-md5 des-hmac-sha1
allow_weak_crypto = yes
udp_preference_limit = 1
[realms]
DOMAIN.COM = {
kdc = kdcserver.domain.com:88
default_domain = domain.com
}
[domain_realm]
.domain.com = DOMAIN.COM
domain.com = DOMAIN.COM
[appdefaults]
pam = {
debug = false
forwardable = true
renew_lifetime = 36000
ticket_lifetime = 36000
krb4_convert = false
}
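Once all three files are in place, the setup can be tried out. The sketch below assumes setEnv.sh is sourced into the shell; the hostname, port, and principals are placeholders, not values from this setup:
# obtain a ticket first, since the JAAS entry above uses useTicketCache=true
kinit hive-user@DOMAIN.COM
# load the environment created above, then connect with a Kerberos-enabled JDBC URL
source bin/setEnv.sh
beeline -u "jdbc:hive2://hiveserver.domain.com:10000/default;principal=hive/hiveserver.domain.com@DOMAIN.COM"
Because useTicketCache=true relies on the ticket cache, a valid ticket must already exist before connecting.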
It should be noted that the above config doesn't use a renewable Kerberos ticket, but that's just an example and you can make it renewable.
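For jobs that outlive even a renewable ticket, one common alternative (not shown in the original answer) is a keytab-based JAAS login. A sketch, with a hypothetical keytab path and principal, might look like this:
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/user/beeline/conf/hive-user.keytab"
principal="hive-user@DOMAIN.COM"
storeKey=true
doNotPrompt=true;
};
With a keytab the client can log in again whenever its current ticket expires, so long-running or concurrent jobs do not depend on someone manually running kinit.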

Related

Getting Kerberos to work on my Ansible setup: ('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377)

I've been having some issues setting up Kerberos within my lab setup.
Ansible server: Ubuntu
AD server: Win 2016 server
Target server: Win 2016 server
Please note that I can get ansible working with my target server when using local authentication.
What have I done?
Read "Using Ansible on windows with domain user"
https://osric.com/chris/accidental-developer/2017/01/error-cannot-contact-any-kdc-for-realm-while-getting-initial-credentials/
Here is my inventory file:
[sqlservers]
myserver.mylab.local ansible_host=192.x.x.x
[sqlservers:vars]
ansible_user = ansible-user@MYLAB.LOCAL
ansible_password = xxxxxx
ansible_connection = winrm
ansible_winrm_transport = kerberos
ansible_port = 5986
ansible_winrm_server_cert_validation = ignore
#ansible_winrm_kinit_cmd = "/opt/VA/uxauth/bin/uxconsole -krb -init"
ansible_winrm_kerberos_delegation = true
Contents of the krb5.conf file
[libdefaults]
default_realm = MYLAB.LOCAL
[realms]
MYLAB.LOCAL = {
kdc = adserver.mylab.local
admin_server = adserver.mylab.local
default_domain = mylab.local
}
[domain_realm]
.mylab.local = MYLAB.LOCAL
mylab.local = MYLAB.LOCAL
I get the error message below.
fatal: [myserver.mylab.local]: UNREACHABLE! => {"changed": false, "msg": "kerberos: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))", "unreachable": true}
To test that I can get a Kerberos ticket, I am able to run the commands below.
kinit -C ansible-user@MYLAB.LOCAL
klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: ansible-user@MYLAB.LOCAL
Valid starting Expires Service principal
05/21/21 10:50:42 05/21/21 20:50:42 krbtgt/MYLAB.LOCAL@MYLAB.LOCAL
renew until 05/28/21 10:50:39
Thanks all for your answers. In the end I checked everything, and it looks like a reboot of the AD server resolved the issue.
Also, when specifying the credentials for the playbook to use, the domain name needs to be in capitals. If the domain name is not in uppercase, you get a KDC error.
ansible-user@MYDOMAIN.COM
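As a quick illustration of the casing point (the principal below is hypothetical):
kinit ansible-user@MYLAB.LOCAL   # uppercase realm: ticket is issued
kinit ansible-user@mylab.local   # lowercase realm: typically fails with "Cannot find KDC for realm"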

How to set up a Kerberos realm without a domain name

I'm currently setting up Kerberos for an Ambari Hortonworks environment. For a number of reasons, I'm unable to use a distinct domain name as the realm name for this install. This is strange because - from what I read - the realm name is just set to the domain name by convention. In theory it can be any ASCII string.
For this Ambari environment I'm essentially trying to set up Kerberos where
[libdefaults]
default_realm = FOOBAR
In fact, my current krb5.conf looks something like this:
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = FOOBAR
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
rdns = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
#Not sure how to use this mapping property in this case
FOOBAR = FOOBAR
.FOOBAR = FOOBAR
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
FOOBAR = {
admin_server = {admin ip address}
kdc = {kdc ip address}
}
/etc/hosts
{kdc ip address} FOOBAR kdc
One ought to be able to short-circuit the DNS check with the hosts file. But I can't seem to get Kerberos working this way. All the documentation I found so far online describes the nice, safe setup following the DNS convention.
Can anyone point to a tutorial, or describe the necessary steps to make Kerberos work without a domain name?
Given the lack of helpful responses, I'll just share what I ended up using (it works but might not be optimal):
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = FOOBAR
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
FOOBAR = FOOBAR
.FOOBAR = FOOBAR
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
FOOBAR = {
admin_server = {admin_server ip}
kdc = {kdc_server ip}
}
In addition, be sure to add the ip addresses and hostnames for all machines in the cluster to /etc/hosts files.
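As a rough sketch, the /etc/hosts entries on each node could look something like this (addresses and hostnames are placeholders, not from the original setup):
10.0.0.10   FOOBAR   kdc
10.0.0.11   ambari-node1
10.0.0.12   ambari-node2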

mac + tableau + kerberos + hive + cloudera: GSS "Minor code may provide more information (No credentials found with supported encryption type)"

I am unable to connect Tableau using the Cloudera Hive driver with Kerberos authentication, even after configuring krb5.conf with the appropriate info.
The issue is with the encryption properties used in the /etc/krb5.conf file. I removed the following lines to make it work:
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
The following is the complete (sudo) contents of the /etc/krb5.conf file.
[libdefaults]
default_realm = NA.CORP.xxx.com
dns_lookup_kdc = true
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
NA.xxx.COM = {
kdc = xxx.com
admin_server = xxx1.com
}
default_cc_type = FILE
default_ccache_name = FILE:/tmp/krb5cc_501
[domain_realm]
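If the error persists, one quick way to see which encryption types the ticket actually carries is klist with the -e flag (the principal below is a placeholder):
kinit myuser@YOUR.REALM
klist -e
The enctypes listed there should overlap with those supported by the cluster services; otherwise the "No credentials found with supported encryption type" error can reappear.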

Got the ticket from the KDC (CentOS 7) on my Windows machine but still cannot reach the web URL

I am new to Hadoop. I made a Hadoop cluster with 3 CentOS machines in VMware and also Kerberized the cluster. It works fine inside VMware; I can reach the URL with Firefox on the CentOS machines.
However, when I try to reach the page outside VMware (on my Windows machine), it always shows an error.
I can ping each machine by IP or hostname (I have set the hosts file).
I have got the ticket from the KDC on my Windows machine via MIT Kerberos, and when I type klist in the Windows cmd it shows the ticket.
I have set up Firefox as suggested (in CentOS I can reach the page).
What else should I set? Help please!
These are my krb5.ini and krb5.conf on my Windows and CentOS machines:
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
renewable = true
rdns = false
default_realm = HADOOP.COM
[realms]
HADOOP.COM = {
kdc = master:88
admin_server = master:749
}
[domain_realm]
master = HADOOP.COM
slave1 = HADOOP.COM
slave2 = HADOOP.COM
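One thing that is often needed on the Windows side, in addition to the MIT ticket, is Firefox's negotiate-auth settings in about:config. A sketch along these lines (hostnames taken from the config above; adjust to the URLs you actually browse to):
network.negotiate-auth.trusted-uris = master,slave1,slave2
network.auth.use-sspi = false
Setting network.auth.use-sspi to false makes Firefox on Windows use the MIT Kerberos ticket cache via GSSAPI instead of the built-in SSPI, which only sees domain-logon credentials; whether that applies here depends on how the Windows machine is joined.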

Session error when using multiple uWSGI workers and Beaker session.type is memory

I'm running a Pyramid webapp, using velruse to do OAuth. When running the app alone, it succeeds.
But when running under uWSGI with multiple workers and session.type = memory,
request.session does not contain the necessary token info when the OAuth callback comes back.
production.ini:
session.type = memory
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock
session.key = mykey
session.secret = mysecret
[uwsgi]
socket = 127.0.0.1:6543
master = true
workers = 8
max-requests = 65536
debug = false
autoload = true
virtualenv = /home/myname/my_env
pidfile = ./uwsgi.pid
daemonize = ./mypyramid-uwsgi.log
If you use memory as the session store, only the worker in which the session data has been written will be able to use that info. You should use another session store that can be shared by all of the workers/processes.
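As a sketch of that suggestion, a file-backed Beaker store (shareable by all workers on the same host) could reuse the directories already in production.ini; the values below are illustrative:
session.type = file
session.data_dir = %(here)s/data/sessions/data
session.lock_dir = %(here)s/data/sessions/lock
session.key = mykey
session.secret = mysecret
If the app is later spread across multiple hosts, a networked backend such as ext:memcached or ext:database would be needed instead.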
Your uWSGI config is not clear (it looks like it only contains the socket option). Can you re-paste it?
