We're trying to set up Oracle authentication via Microsoft AD, and there are some issues with the kvno.
For example, for a 4-node cluster, each keytab file has a different kvno:
KVNO Principal
---- --------------------------------------------------------------------------
  12 ORACLE/NODE1@xxxxxx
  13 ORACLE/NODE2@xxxxxx
  14 ORACLE/NODE3@xxxxxx
  15 ORACLE/NODE4@xxxxxx
[oracle@teste]$ kvno ORACLE/NODE1@xxxxxx
ORACLE/NODE1@xxxxxx: kvno = 15
[oracle@teste]$ kvno ORACLE/NODE2@xxxxxx
ORACLE/NODE2@xxxxxx: kvno = 15
[oracle@teste]$ kvno ORACLE/NODE3@xxxxxx
ORACLE/NODE3@xxxxxx: kvno = 15
[oracle@teste]$ kvno ORACLE/NODE4@xxxxxx
ORACLE/NODE4@xxxxxx: kvno = 15
NODE4 is the only one whose keytab matches kvno 15.
I don't know exactly how the keytabs are being generated on AD, because that's handled by a different team; they send the files to us. But the command being used is something like this:
ktpass -princ ORACLE/NODE1@xxxxxx -mapUser username -pass -crypto ALL -ptype KRB5_NT_PRINCIPAL -out c:\name.keytab
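For reference, the per-file kvno values shown in the table above can be read with klist from the MIT Kerberos client tools (a sketch; the path is a placeholder):
# list the entries (including kvno) stored in a keytab file
$ klist -kt /path/to/node1.keytab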
The question is: what should we do to get the same kvno in all the keytab files?
Thanks.
When trying to decrypt the Kerberos ticket using the keytab file, it shows the error "missing keytype 18", even though the keytab file has keytype 18.
Based on the information you shared:
SPN: HOST/INTVMDC03.xxxx.com/xxxx.com.
Keytab entries:
1 1013219@xxxx.com (18:AES256 CTS mode with HMAC SHA1-96)
1 1013219@xxxx.com (17:AES128 CTS mode with HMAC SHA1-96)
1 1013219@xxxx.com (20:AES256 CTS mode with HMAC SHA384-192)
1 1013219@xxxx.com (19:AES128 CTS mode with HMAC SHA256-128)
1 1013219@xxxx.com (16:DES3 CBC mode with SHA1-KD)
1 1013219@xxxx.com (23:RC4 with HMAC)
There is no entry corresponding to the SPN being used inside your keytab.
What you need are SPN entries inside the keytab, not UPN entries.
Remember that the ticket is issued for the SPN, not for the user principal name (UPN). Therefore Kerberos looks inside the keytab for the entry matching the SPN for which the ticket was issued.
Please generate a new keytab file that includes the SPN.
On Windows, you can use the ktpass command (usually available on Windows Server).
For example:
ktpass /out <filename> /princ <ServicePrincipalName> /mapuser <UserPrincipalName> /pass <UPN password> /crypto ALL /ptype KRB5_NT_PRINCIPAL /kvno 0
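Once generated, you can verify from a client that the keytab carries the SPN and that its kvno matches what the KDC reports (a sketch; <filename> and <ServicePrincipalName> are the same placeholders as above):
# list the entries stored in the new keytab; the SPN should appear here
$ klist -kt <filename>
# ask the KDC for a ticket for the SPN; the reported kvno should match the keytab entry
$ kvno <ServicePrincipalName>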
I am seeing nonsense values for user names in folder permissions for NFS-mounted HDFS locations, while the HDFS locations themselves (using Hortonworks HDP 3.1) appear fine. E.g.:
➜ ~ ls -lh /nfs_mount_root/user
total 6.5K
drwx------. 3 accumulo hdfs 96 Jul 19 13:53 accumulo
drwxr-xr-x. 3 92668751 hadoop 96 Jul 25 15:17 admin
drwxrwx---. 3 ambari-qa hdfs 96 Jul 19 13:54 ambari-qa
drwxr-xr-x. 3 druid hadoop 96 Jul 19 13:53 druid
drwxr-xr-x. 2 hbase hdfs 64 Jul 19 13:50 hbase
drwx------. 5 hdfs hdfs 160 Aug 26 10:41 hdfs
drwxr-xr-x. 4 hive hdfs 128 Aug 26 10:24 hive
drwxr-xr-x. 5 h_etl hdfs 160 Aug 9 14:54 h_etl
drwxr-xr-x. 3 108146 hdfs 96 Aug 1 15:43 ml1
drwxrwxr-x. 3 oozie hdfs 96 Jul 19 13:56 oozie
drwxr-xr-x. 3 882121447 hdfs 96 Aug 5 10:56 q_etl
drwxrwxr-x. 2 spark hdfs 64 Jul 19 13:57 spark
drwxr-xr-x. 6 zeppelin hdfs 192 Aug 23 15:45 zeppelin
➜ ~ hadoop fs -ls /user
Found 13 items
drwx------ - accumulo hdfs 0 2019-07-19 13:53 /user/accumulo
drwxr-xr-x - admin hadoop 0 2019-07-25 15:17 /user/admin
drwxrwx--- - ambari-qa hdfs 0 2019-07-19 13:54 /user/ambari-qa
drwxr-xr-x - druid hadoop 0 2019-07-19 13:53 /user/druid
drwxr-xr-x - hbase hdfs 0 2019-07-19 13:50 /user/hbase
drwx------ - hdfs hdfs 0 2019-08-26 10:41 /user/hdfs
drwxr-xr-x - hive hdfs 0 2019-08-26 10:24 /user/hive
drwxr-xr-x - h_etl hdfs 0 2019-08-09 14:54 /user/h_etl
drwxr-xr-x - ml1 hdfs 0 2019-08-01 15:43 /user/ml1
drwxrwxr-x - oozie hdfs 0 2019-07-19 13:56 /user/oozie
drwxr-xr-x - q_etl hdfs 0 2019-08-05 10:56 /user/q_etl
drwxrwxr-x - spark hdfs 0 2019-07-19 13:57 /user/spark
drwxr-xr-x - zeppelin hdfs 0 2019-08-23 15:45 /user/zeppelin
Notice the difference for users ml1 and q_etl: they have numeric user values when running ls on the NFS locations, rather than their user names.
Even doing something like...
[hdfs@HW04 ml1]$ hadoop fs -chown ml1 /user/ml1
does not change the NFS permissions. Even more annoying, when trying to change the NFS mount permissions as root, we see
[root@HW04 ml1]# chown ml1 /nfs_mount_root/user/ml1
chown: changing ownership of ‘/nfs_mount_root/user/ml1’: Permission denied
This causes real problems, since the differing uid means that I can't access these dirs to write to them, even as the "correct" user. Not sure what to make of this. Does anyone with more Hadoop experience have any debugging suggestions or fixes?
UPDATE:
Doing a bit more testing / debugging, I found that the rules appear to be...
If the NFS server node has no uid (or gid?) that matches the uid of the user on the node accessing the NFS mount, we get the weird numeric uid values seen above.
If there is a uid associated with the username of the user on the requesting node, then that is the uid we see assigned to the location when accessing via NFS (even if that uid on the NFS server node does not actually belong to the requesting user), e.g.
[root@HW01 ~]# clush -ab id ml1
---------------
HW[01,04] (2)
---------------
uid=1025(ml1) gid=1025(ml1) groups=1025(ml1)
---------------
HW[02-03] (2)
---------------
uid=1027(ml1) gid=1027(ml1) groups=1027(ml1)
---------------
HW05
---------------
uid=1026(ml1) gid=1026(ml1) groups=1026(ml1)
[root@HW01 ~]# exit
logout
Connection to hw01 closed.
➜ ~ ls -lh /hdpnfs/user
total 6.5K
...
drwxr-xr-x. 6 atlas hdfs 192 Aug 27 12:04 ml1
...
➜ ~ hadoop fs -ls /user
Found 13 items
...
drwxr-xr-x - ml1 hdfs 0 2019-08-27 12:04 /user/ml1
...
[root@HW01 ~]# clush -ab id atlas
---------------
HW[01,04] (2)
---------------
uid=1027(atlas) gid=1005(hadoop) groups=1005(hadoop)
---------------
HW[02-03] (2)
---------------
uid=1024(atlas) gid=1005(hadoop) groups=1005(hadoop)
---------------
HW05
---------------
uid=1005(atlas) gid=1006(hadoop) groups=1006(hadoop)
If you are wondering why I have users on the cluster whose uids vary across the cluster nodes, see the problem posted here: How to properly change uid for HDP / ambari-created user? (Note that these odd uid settings for the hadoop service users were set up by Ambari by default.)
After talking with someone more knowledgeable in HDP hadoop, I found that the problem was that when Ambari was set up and run to initially install the hadoop cluster, there may have been other preexisting users on the designated cluster nodes.
Ambari creates its various service users by giving each the next available UID from a node's block of user UIDs. However, prior to installing Ambari and HDP on the nodes, I had created some users on the to-be namenode (and others) in order to do some initial maintenance checks and tests. I should have just done this as root. Adding these extra users offset the UID counter on those nodes, so as Ambari created users and incremented the UIDs, it started from different counter values on different nodes. Thus the UIDs did not sync across nodes, which caused the problems with HDFS NFS.
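To illustrate with a made-up user name and hypothetical uids: if one node already had an extra local user occupying the next free uid when Ambari started creating accounts, every uid auto-assigned afterwards on that node ends up shifted relative to the other nodes:
# node A (a pre-existing user already took uid 1025)
$ useradd svc1; id -u svc1
1026
# node B (clean, so the same counter value is still free)
$ useradd svc1; id -u svc1
1025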
To fix this, I...
Used Ambari to stop all running HDP services
Went to Service Accounts in Ambari and copied all of the expected service user name strings
For each user, ran something like id <service username> to get the group(s) for each user. For service groups (which may have multiple members), you can do something like grep 'group-name-here' /etc/group. I recommend doing it this way, as the Ambari docs of default users and groups do not have some of the info that you can get here.
Used userdel and groupdel to remove all the Ambari service users and groups
Then recreated all the groups across the cluster
Then recreated all the users across the cluster (you may need to specify the UID if some nodes have other users that the rest don't)
Restarted the HDP services (hopefully everything still runs as if nothing happened, since HDP should be looking things up by the literal user-name string, not the UID)
For the last parts, you can use something like clustershell, e.g.
# remove user
$ clush -ab userdel <service username>
# check that the UID you want to use is actually available on all nodes
$ clush -ab id <some specific UID you want to use>
# assign that UID to a new service user
$ clush -ab useradd --uid <the specific UID> --gid <groupname> <service username>
To get the lowest common available UID from each node, I used...
# for UID
getent passwd | awk -F: '($3>1000) && ($3<10000) && ($3>maxuid) { maxuid=$3; } END { print maxuid+1; }'
# for GID (this scans the primary-group field of passwd)
getent passwd | awk -F: '($4>1000) && ($4<10000) && ($4>maxgid) { maxgid=$4; } END { print maxgid+1; }'
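As a sketch, to pick a uid that is free on every node in one shot, you could gather all uids from all nodes and take the overall maximum locally (assuming your clush supports -N to suppress the hostname prefixes on output lines):
# gather every uid in use cluster-wide, then print the next value above the highest in range
$ clush -aN 'getent passwd | cut -d: -f3' | awk '($1>1000) && ($1<10000) && ($1>maxuid) { maxuid=$1; } END { print maxuid+1; }'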
Ambari also creates some /home dirs for users. Once you are done recreating the users, you will need to fix the ownership and permissions of those dirs (you can use something like clush there as well), e.g. as sketched below.
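For example (a sketch; it assumes the home dirs live under /home and that <groupname> is the user's service group):
# reclaim each recreated service user's home dir on all nodes
$ clush -ab chown -R <service username>:<groupname> /home/<service username>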
* Note that this was a huge pain, and you would need to manually correct the UIDs of users whenever you add another cluster node. I did this for a test cluster, but for production (or even a larger test) you should just use Kerberos or SSSD + Active Directory.
I am trying to use LFTP to mirror a remote FTP server running on Windows (that's all I know about its configuration; also, I only have read access).
I'm running the following shell script:
#!/bin/bash
HOST='omitted'
USER='omitted'
PASS='omitted'
LOCALFOLDER="omitted"
REMOTEFOLDER="/Initial data/Practice area/Intellectual Property/"
lftp -f "
debug -o debug.text 9
open $HOST
user $USER $PASS
cd '$REMOTEFOLDER'
ls
mirror --reverse --verbose '$REMOTEFOLDER' '$LOCALFOLDER'
bye
"
After running it, I get the following output:
source: Is a directory
drwxr-xr-x 1 ftp ftp 0 Mar 23 2017 03.17.17
drwxr-xr-x 1 ftp ftp 0 Nov 05 2016 2016.10.03
drwxr-xr-x 1 ftp ftp 0 Nov 05 2016 2016.10.07
drwxr-xr-x 1 ftp ftp 0 Feb 23 2017 2017.02.21
drwxr-xr-x 1 ftp ftp 0 Feb 26 2017 2017.02.24
drwxr-xr-x 1 ftp ftp 0 Mar 02 2017 2017.02.27
drwxr-xr-x 1 ftp ftp 0 Apr 11 2017 2017.03.17
drwxr-xr-x 1 ftp ftp 0 Mar 28 2017 2017.03.27
drwxr-xr-x 1 ftp ftp 0 Apr 04 2017 2017.03.31
drwxr-xr-x 1 ftp ftp 0 Aug 09 08:34 2017.04.06
drwxr-xr-x 1 ftp ftp 0 Jun 07 2017 2017.05.31
drwxr-xr-x 1 ftp ftp 0 Jul 17 10:52 2017.07.17
drwxr-xr-x 1 ftp ftp 0 Feb 19 2017 New Folder
mirror: Access failed: /Initial data/Practice area/Intellectual Property: No such file or directory
As you can see, I can list all the files inside the folder that I want to mirror, but the mirror command fails.
And the following is the debug output:
---- Resolving host address...
---- 1 address found: <omitted>
---- Connecting to <omitted> (<omitted>) port <omitted>
<--- 220 Welcome to <omitted>
---> FEAT
<--- 211-Features:
<--- MDTM
<--- REST STREAM
<--- SIZE
<--- MLST type*;size*;modify*;
<--- MLSD
<--- UTF8
<--- CLNT
<--- MFMT
<--- 211 End
---> CLNT lftp/4.6.3a
<--- 200 Don't care
---> OPTS UTF8 ON
<--- 530 Please log in with USER and PASS first.
---> USER <omitted>
<--- 331 Password required for <omitted>
---> PASS <omitted>
<--- 230 Logged on
---> CLNT lftp/4.6.3a
<--- 200 Don't care
---> OPTS UTF8 ON
<--- 200 UTF8 mode enabled
---> PWD
<--- 257 "/" is current directory.
---- CWD path to be sent is `/Initial data/Practice area/Intellectual Property'
---> CWD Initial data
<--- 250 CWD successful. "/Initial data" is current directory.
---> CWD Practice area
<--- 250 CWD successful. "/Initial data/Practice area" is current directory.
---> CWD Intellectual Property
<--- 250 CWD successful. "/Initial data/Practice area/Intellectual Property" is current directory.
---> PASV
<--- 227 Entering Passive Mode (<omitted>)
---- Connecting data socket to (<omitted>) port <omitted>
---- Data connection established
---> LIST
<--- 150 Connection accepted
---- Got EOF on data connection
---- Closing data socket
<--- 226 Transfer OK
**** /Initial data/Practice area/Intellectual Property: No such file or directory
---> QUIT
<--- 221 Goodbye
---- Closing control socket
I'd really appreciate it if you have any idea of what's wrong :)
Thanks,
Remove the --reverse option, which is for uploading to the server.
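With --reverse removed, the first argument to mirror is the remote directory and the second is the local target. A minimal corrected sketch of the lftp block from the question:
lftp -f "
open $HOST
user $USER $PASS
mirror --verbose '$REMOTEFOLDER' '$LOCALFOLDER'
bye
"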
I have been using hadoop with a Kerberos keytab file named userid.keytab for a long while, but now I don't know the password. Is there any way to get the password from the keytab file?
No, you can't.
The only thing you can get from a keytab file is the principal name:
$ ktutil
ktutil: read_kt test.wtk
ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------------------------------------
   1    1 hadoop_app@BLALBLABLA.LOC
A keytab contains pairs of principals and encrypted keys (which are derived from the Kerberos password); there is no way to get the password back from this data.
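Note that for most purposes you do not need the password at all; you can authenticate directly with the keytab (a sketch, reusing the file and principal from the listing above):
$ kinit -kt test.wtk hadoop_app@BLALBLABLA.LOC
$ klist   # the ticket cache should now show hadoop_app@BLALBLABLA.LOC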
A keytab holds a principal name at the very least, but it can also hold the NTLM hash of the password, next to AES hashes of the same password.
You can extract the hashes with https://github.com/sosdave/KeyTabExtract
My COBOL program cannot connect to Oracle when the password field is defined longer than the actual password length for a user; i.e., if the password value is 'mypasswd', the host variable holding the password must be defined as "PIC X(8)", otherwise the connection fails. For example:
1 IDENTIFICATION DIVISION.
2 PROGRAM-ID. SAMPLE.
3 ENVIRONMENT DIVISION.
4 DATA DIVISION.
5 WORKING-STORAGE SECTION.
6 EXEC SQL BEGIN DECLARE SECTION END-EXEC.
7 01 USERNAME PIC X(010).
8 01 PASSWD PIC X(010).
9 01 DBSTRING PIC X(020).
10 EXEC SQL END DECLARE SECTION END-EXEC.
11 EXEC SQL INCLUDE SQLCA END-EXEC.
12
13 PROCEDURE DIVISION.
14 BEGIN-PGM.
15 EXEC SQL WHENEVER SQLERROR
16 DO PERFORM SQL-ERROR
17 END-EXEC.
18 LOGON.
19 MOVE "myuser" TO USERNAME.
20 MOVE "mypasswd" TO PASSWD.
21 MOVE "mydb" TO DBSTRING.
22 EXEC SQL
23 CONNECT :USERNAME IDENTIFIED BY :PASSWD USING :DBSTRING
24 END-EXEC.
25 LOGOUT.
26 DISPLAY "HAVE A GOOD DAY.".
27 EXEC SQL COMMIT WORK RELEASE END-EXEC.
28 STOP RUN.
29 SQL-ERROR.
30 EXEC SQL WHENEVER SQLERROR CONTINUE END-EXEC.
31 DISPLAY "ORACLE ERROR DETECTED:".
32 DISPLAY SQLERRMC.
33 EXEC SQL ROLLBACK WORK RELEASE END-EXEC.
34 STOP RUN.
I then get a connect failure:
ORACLE ERROR DETECTED:
ORA-01017: invalid username/password; logon denied
But when I change the password field definition to:
8 01 PASSWD PIC X(008).
i.e. when the length is exactly the length of the real password value (length("mypasswd") = 8), the program can connect to Oracle successfully.
My situation is that we need users to be able to provide their own username and password, so we must define the username and password fields long enough to hold the maximum length we allow. However, as stated above, all connection requests fail if a user chooses a password shorter than the maximum.
The program was migrated from an old version of Oracle, 11.2.0.1.0, where we did not have this issue; the program was working fine and the connect operation was successful.
The problem occurred after we migrated to Oracle 12.1.0.1.0.
If you are using Pro*COBOL, then this link is for you: http://docs.oracle.com/cd/A57673_01/DOC/api/doc/PCO18/ch1.htm#toc024
It shows how to define your username and password fields as VARYING.
WORKING-STORAGE SECTION.
...
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 USERNAME PIC X(10) VARYING.
01 PASSWD PIC X(10) VARYING.
...
EXEC SQL END DECLARE SECTION END-EXEC.
...
PROCEDURE DIVISION.
LOGON.
MOVE "SCOTT" TO USERNAME-ARR.
MOVE 5 TO USERNAME-LEN.
MOVE "TIGER" TO PASSWD-ARR.
MOVE 5 TO PASSWD-LEN.
EXEC SQL WHENEVER SQLERROR GOTO LOGON-ERROR END-EXEC.
EXEC SQL
CONNECT :USERNAME IDENTIFIED BY :PASSWD
END-EXEC.
As it turns out, the example quoted is not directly useful to you (from the comments), because your passwords may not all be five characters long.
This really is no problem. You can calculate the length of the password for a given user and then, instead of using the literal 5, use the value that you have calculated.
@NealB has shown in his answer a simple way to do this (if you can have no leading or embedded blanks in a password; note that INSPECT TALLYING adds to the counter, so zero it first):
MOVE ZERO TO PSSWDLEN
INSPECT PASSWD TALLYING PSSWDLEN FOR ALL SPACE
COMPUTE PSSWDLEN = LENGTH OF PASSWD - PSSWDLEN
If you are unable to use that method, use a simple loop construct of your choice, starting from the last byte of the password field and continuing while a space is encountered. Watch out for the entirely-spaces possibility.
You may want to use the same technique for the username anyway, as it would be more portable amongst different flavours of Oracle/OS (depending on what it is that is allowing it to work for you). I'd do that, unless it is absolutely impossible that it will ever be required.
You do mention a move to a new Oracle version. This behaviour change should be documented in the Summary of Changes, or a similar section, of the documentation. If you cannot find a reference to it, contact Oracle and find out what is going on.
If you are not using Pro*COBOL, you may be able to emulate the effect of VARYING.
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 USERNAME.
05 USERNAME-LEN BINARY PIC 9(4).
05 USERNAME-VALUE PIC X(10).
01 PASSWD.
05 PASSWD-LEN BINARY PIC 9(4).
05 PASSWD-VALUE PIC X(10).
EXEC SQL END DECLARE SECTION END-EXEC.
Then:
LOGON.
MOVE "SCOTT" TO USERNAME-VALUE.
MOVE 5 TO USERNAME-LEN.
MOVE "TIGER" TO PASSWD-VALUE.
MOVE 5 TO PASSWD-LEN.
EXEC SQL WHENEVER SQLERROR GOTO LOGON-ERROR END-EXEC.
EXEC SQL
CONNECT :USERNAME IDENTIFIED BY :PASSWD
END-EXEC.
You may have to try:
01 USERNAME.
05 USERNAME-LEN BINARY PIC 9(4).
05 USERNAME-VALUE.
10 FILLER OCCURS 1 TO 10 TIMES
DEPENDING ON USERNAME-LEN.
15 FILLER PIC X.
01 PASSWD.
05 PASSWD-LEN BINARY PIC 9(4).
05 PASSWD-VALUE.
10 FILLER OCCURS 1 TO 10 TIMES
DEPENDING ON PASSWD-LEN.
15 FILLER PIC X.
EXEC SQL END DECLARE SECTION END-EXEC.
If you get nowhere with these suggestions, you need to supply more information: the OS, the version of COBOL, the version of Oracle, and what you have tried and what results you got with those attempts.
Have you tried using reference modification to adjust the length of the username/password on the connect request?
I am not an Oracle kind-of-guy, but something like this might work:
22 EXEC SQL
23 CONNECT :USERNAME(1:UNAMELEN) IDENTIFIED BY :PASSWD(1:PSSWDLEN) USING :DBSTRING
24 END-EXEC.
where the UNAMELEN and PSSWDLEN are numeric variables (e.g. PIC S9(4) BINARY) containing the actual lengths of the user name and password.
Determining the values for UNAMELEN and PSSWDLEN can be done with the INSPECT verb, something like this (note that INSPECT TALLYING adds to the counter, so zero it first):
MOVE ZERO TO PSSWDLEN
INSPECT PASSWD TALLYING PSSWDLEN FOR ALL SPACE
COMPUTE PSSWDLEN = LENGTH OF PASSWD - PSSWDLEN
This will work provided that passwords and user names do not contain internal blank spaces. If they do, you will have to compute the actual lengths differently.