"There is no primary group for UGI hdfs" - csi

While trying to read from ADLS, I am getting the error "There is no primary group for UGI hdfs".
If you take a closer look, you will find that the error means the hdfs user is not found.
Please make sure that in ".zshrc" or in any other shell configuration file you don't have
export HADOOP_USER_NAME=hdfs
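A quick way to check is to grep the usual shell startup files for that variable and then clear it from the current session once the export is removed (the files listed below are just the common candidates; adjust for your shell):
grep -n HADOOP_USER_NAME ~/.zshrc ~/.bashrc ~/.bash_profile 2>/dev/null
unset HADOOP_USER_NAME   # clear it in the current shell after deleting the export line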

Related

How to set Hadoop class parameters in Hive, similar to Pig as shown here?

I want Hive to automatically acquire a Kerberos ticket whenever Hive (more specifically the Hive shell, not HiveServer) is executed, and also to renew it automatically if a job runs longer than the ticket's lifetime.
I found similar functionality in Pig; see this. I tested it and it works: it acquires the ticket automatically from the keytab, so I don't have to acquire it manually using kinit before starting a job. It also renews tickets whenever needed, as mentioned in the doc.
While researching I came across user-name-handling-in-hadoop and found a similar log statement dumping the configuration parameters of the UserGroupInformation class when Hive starts.
Since I want this every time Hive is executed, I tried putting it in HADOOP_OPTS, which looks like this:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.krb5.conf=/etc/krb5.conf -Dhadoop.security.krb5.principal=root@MSI.COM -Dhadoop.security.krb5.keytab=/etc/security/keytab/user.service.keytab"
But whenever I execute it, it dumps the following parameters, which means the principal and keytab are not being picked up; the property names may be wrong, since I used the names I found for Pig. The krb5.conf property is taken into account, because changing the name of the conf file causes a "default realm can't be found" error, as the correct conf file can no longer be read.
23/01/23 23:33:28 DEBUG security.UserGroupInformation: hadoop login commit
23/01/23 23:33:28 DEBUG security.UserGroupInformation: using kerberos user:null
23/01/23 23:33:28 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
23/01/23 23:33:28 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
23/01/23 23:33:28 DEBUG security.UserGroupInformation: User entry: "root"
23/01/23 23:33:28 DEBUG security.UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
23/01/23 23:33:28 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:KERBEROS)
Thanks in advance for any guidance.
I ultimately want that whenever the Hive shell or Hive CLI is invoked, it automatically requests a Kerberos ticket and renews it when needed.
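One common workaround (not the fully in-process renewal being asked for) is to wrap the CLI in a script that obtains a fresh ticket from the keytab on every invocation; a minimal sketch, reusing the keytab and principal from the question above and assuming a periodic re-run or cron job covers renewal:
#!/usr/bin/env bash
# Obtain (or refresh) a Kerberos ticket from the keytab, then start the Hive CLI.
kinit -kt /etc/security/keytab/user.service.keytab root@MSI.COM
exec hive "$@"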

Streamsets Mapr FS origin/dest. KerberosPrincipal exception (using hadoop impersonation (in mapr 6.0))

I am trying to do a simple data move from a mapr fs origin to a mapr fs destination (this is not my use case, just doing this simple movement for testing purposes). When trying to validate this pipeline, the error message I see in the staging area is:
HADOOPFS_11 - Cannot connect to the filesystem. Check if the Hadoop FS location: 'maprfs:///mapr/mycluster.cluster.local' is valid or not: 'java.io.IOException: Provided Subject must contain a KerberosPrincipal
Trying different variations of the Hadoop FS URI field (e.g. mfs:///mapr/mycluster.cluster.local, maprfs:///mycluster.cluster.local) does not seem to help. Looking at the logs after trying to validate, I see:
2018-01-04 10:28:56,686 mfs2mfs/mapr2sqlserver850bfbf0-6dc0-4002-8d44-b73e33fcf9b3 INFO Created source of type: com.streamsets.pipeline.stage.origin.maprfs.ClusterMapRFSSource#16978460 DClusterSourceOffsetCommitter *admin preview-pool-1-thread-3
2018-01-04 10:28:56,697 mfs2mfs/mapr2sqlserver850bfbf0-6dc0-4002-8d44-b73e33fcf9b3 INFO Error connecting to FileSystem: java.io.IOException: Provided Subject must contain a KerberosPrincipal ClusterHdfsSource *admin preview-pool-1-thread-3
java.io.IOException: Provided Subject must contain a KerberosPrincipal
....
2018-01-04 10:20:39,159 mfs2mfs/mapr2mapr850bfbf0-6dc0-4002-8d44-b73e33fcf9b3 INFO Authentication Config: ClusterHdfsSource *admin preview-pool-1-thread-3
2018-01-04 10:20:39,159 mfs2mfs/mapr2mapr850bfbf0-6dc0-4002-8d44-b73e33fcf9b3 ERROR Issues: Issue[instance='MapRFS_01' service='null' group='HADOOP_FS' config='null' message='HADOOPFS_11 - Cannot connect to the filesystem. Check if the Hadoop FS location: 'maprfs:///mapr/mycluster.cluster.local' is valid or not: 'java.io.IOException: Provided Subject must contain a KerberosPrincipal''] ClusterHdfsSource *admin preview-pool-1-thread-3
2018-01-04 10:20:39,169 mfs2mfs/mapr2mapr850bfbf0-6dc0-4002-8d44-b73e33fcf9b3 INFO Validation Error: Failed to configure or connect to the 'maprfs:///mapr/mycluster.cluster.local' Hadoop file system: java.io.IOException: Provided Subject must contain a KerberosPrincipal HdfsTargetConfigBean *admin 0 preview-pool-1-thread-3
java.io.IOException: Provided Subject must contain a KerberosPrincipal
....
However, to my knowledge, the system is not running Kerberos, so this error message is a bit confusing to me. Uncommenting #export SDC_JAVA_OPTS="-Dmaprlogin.password.enabled=true ${SDC_JAVA_OPTS}" in the sdc environment variable file for native mapr authentication did not seem to help the problem (even when reinstalling and commenting this line before running the streamsets mapr setup script).
Does anyone have any idea what is happening and how to fix it? Thanks.
This answer was provided on the mapr community forums and worked for me (using mapr v6.0). Note that the instructions here differ from those currently provided by the streamsets documentation. Throughout these instructions, I was logged in as user root.
After installing streamsets (and the mapr prerequisites) as per the documentation...
Change the owner of the streamsets $SDC_DIST or $SDC_HOME location to the mapr user (or whatever other user you plan to use for the hadoop impersonation): $chown -R mapr:mapr $SDC_DIST (for me this was the /opt/streamsets-datacollector dir.). Do the same for $SDC_CONF (/etc/sdc for me) as well as /var/lib/sdc and /var/log/sdc; the commands are collected below.
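Collected in one place, assuming the default locations mentioned above (substitute your own user if you are not impersonating as mapr):
chown -R mapr:mapr /opt/streamsets-datacollector   # $SDC_DIST
chown -R mapr:mapr /etc/sdc                        # $SDC_CONF
chown -R mapr:mapr /var/lib/sdc /var/log/sdc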
In $SDC_DIST/libexec/sdcd-env.sh, set the user and group name (near the top of the file) to the mapr user "mapr" and enable mapr password login. The file should end up looking like:
# user that will run the data collector, it must exist in the system
#
export SDC_USER=mapr
# group of the user that will run the data collector, it must exist in the system
#
export SDC_GROUP=mapr
....
# Indicate that MapR Username/Password security is enabled
export SDC_JAVA_OPTS="-Dmaprlogin.password.enabled=true ${SDC_JAVA_OPTS}"
Edit the file /usr/lib/systemd/system/sdc.service to look like:
[Service]
User=mapr
Group=mapr
$cd into /etc/systemd/system/ and create a directory called sdc.service.d. Within that directory, create a file (with any name) and add the contents (without spaces):
Environment=SDC_JAVA_OPTS=-Dmaprlogin.password.enabled=true
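For reference, a sketch of creating that drop-in from the shell (the file name mapr.conf is arbitrary; note that systemd expects the assignment to sit under a [Service] header):
mkdir -p /etc/systemd/system/sdc.service.d
cat > /etc/systemd/system/sdc.service.d/mapr.conf <<'EOF'
[Service]
Environment=SDC_JAVA_OPTS=-Dmaprlogin.password.enabled=true
EOF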
If you are using mapr's sasl ticket auth. system (or something similar), generate a ticket for this user on the node that is running streamsets. In this case, with the $maprlogin password command.
Then finally, restart the sdc service: $systemctl daemon-reload then $systemctl restart sdc.
Run something like $ps -aux | grep sdc | grep maprlogin to check that the sdc process is owned by mapr and that the -Dmaprlogin.password.enabled=true parameter has been successfully set. Once this is done, you should be able to validate/run maprFS to maprFS operations in the streamsets pipeline builder in batch processing mode.
** NOTE: If using the Hadoop Configuration Directory param. instead of the Hadoop FS URI, remember to have the files in your $HADOOP_HOME/conf directory (e.g. hadoop-site.xml, yarn-site.xml, etc.; in the case of mapr, something like /opt/mapr/hadoop/hadoop-<version>/etc/hadoop/) either soft-linked or hard-copied to a directory $SDC_DIST/resources/<some hadoop config dir. you may need to create> (I just copy everything in the directory), and add this path to the Hadoop Configuration Directory param. for your MaprFS (or HadoopFS) stage. In the sdc web UI Hadoop Configuration Directory box, it would look like: Hadoop Configuration Directory: <the directory within $SDC_DIST/resources/ that holds the hadoop files>. A sketch of the copy follows below.
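As a rough sketch, assuming you name the directory hadoop-conf (an arbitrary choice) and with <version> left as a placeholder exactly as above; you would then enter hadoop-conf in the Hadoop Configuration Directory box:
mkdir -p $SDC_DIST/resources/hadoop-conf
cp /opt/mapr/hadoop/hadoop-<version>/etc/hadoop/*.xml $SDC_DIST/resources/hadoop-conf/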
** NOTE: If you are still logging errors of the form
2018-01-16 14:26:10,883
ingest2sa_demodata_batch/ingest2sademodatabatchadca8442-cb00-4a0e-929b-df2babe4fd41
ERROR Error in Slave Runner: ClusterRunner *admin
runner-pool-2-thread-29
com.streamsets.datacollector.runner.PipelineRuntimeException:
CONTAINER_0800 - Pipeline
'ingest2sademodatabatchadca8442-cb00-4a0e-929b-df2babe4fd41'
validation error : HADOOPFS_11 - Cannot connect to the filesystem.
Check if the Hadoop FS location: 'maprfs:///' is valid or not:
'java.io.IOException: Provided Subject must contain a
KerberosPrincipal'
you may also need to add -Dmaprlogin.password.enabled=true to the pipeline's /cluster/Worker Java Options tab for the origin and destination hadoop FS stages.
** The video linked from the mapr community post also says to generate a mapr ticket for the sdc user (the default user that the sdc process runs as when running as a service), but I did not do this and the solution still worked for me (so if anyone has any idea why it should be done regardless, please let me know in the comments).

Unable to create table in Hive

I am running the simple query below to create a simple table.
create table test (id int, name varchar(20));
But I am getting the error below; please let me know what exactly needs to be done.
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:file:/user
/hive/warehouse/test is not a directory or unable to create one)
I have given full read/write access to /user/hive/warehouse folder.
Your hive user doesn't have permission to create directories in HDFS. Whenever you create a table, Hive makes a directory for it under /user/hive/warehouse/, but here it is not able to create that directory, so grant permission on /user/hive/warehouse/ to allow your user to create a table.
http://www.cloudera.com/documentation/archive/cdh/4-x/4-2-0/CDH4-Installation-Guide/cdh4ig_topic_18_7.html
Sounds like a permissions issue. The change mode command below may help.
hadoop fs -chmod -R 755 /user/hive/warehouse/
The error message says
file:/user /hive/warehouse/test
Despite that space between /user and the rest of the path, the file:/ prefix means that Hive is trying to create that directory on your local file system instead of on HDFS. There is probably a problem with accessing the configuration. I would check whether the HADOOP_CONF_DIR environment variable is properly initialized.
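A quick way to check, assuming the hdfs command is on the PATH (the property should resolve to an hdfs:// URI, not file:///):
echo $HADOOP_CONF_DIR
hdfs getconf -confKey fs.defaultFS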
For me, the issue was exactly like yours (creating an internal table), but it was not related to permissions; it was storage, which was 100% used. Try checking for the same.
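For example, to check both local and HDFS capacity (just one way to look; mounts and paths vary by installation):
df -h               # local disks (metastore, scratch dirs, local warehouse)
hdfs dfs -df -h /   # HDFS capacity and usage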

Could not find uri with key dfs.encryption.key.provider.uri to create a keyProvider in HDFS encryption for CDH 5.4

CDH Version: CDH 5.4.5
Issue: When HDFS encryption is enabled using the KMS available in Hadoop CDH 5.4, I get an error while putting a file into an encryption zone.
The steps for enabling HDFS encryption are as follows:
1. Creating a key [SUCCESS]
[tester@master ~]$ hadoop key create 'TDEHDP'
-provider kms://https@10.1.118.1/key_generator/kms -size 128
tde group has been successfully created with options
Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
KMSClientProvider[https://10.1.118.1/key_generator/kms/v1/] has been updated.
2. Creating a directory [SUCCESS]
[tester@master ~]$ hdfs dfs -mkdir /user/tester/vs_key_testdir
3. Adding Encryption Zone [SUCCESS]
[tester@master ~]$ hdfs crypto -createZone -keyName 'TDEHDP'
-path /user/tester/vs_key_testdir
Added encryption zone /user/tester/vs_key_testdir
4. Copying File to encryption Zone [ERROR]
[tdetester@master ~]$ hdfs dfs -copyFromLocal test.txt /user/tester/vs_key_testdir
15/09/04 06:06:33 ERROR hdfs.KeyProviderCache: Could not find uri with
key [dfs.encryption.key.provider.uri] to create a keyProvider !!
copyFromLocal: No KeyProvider is configured, cannot access an
encrypted file 15/09/04 06:06:33 ERROR hdfs.DFSClient: Failed to close
inode 20823
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /user/tester/vs_key_testdir/test.txt.COPYING (inode
20823): File does not exist. Holder
DFSClient_NONMAPREDUCE_1061684229_1 does not have any open files.
Any idea/suggestion will be helpful.
This issue was crossposted here: https://community.cloudera.com/t5/Storage-Random-Access-HDFS/Could-not-find-uri-with-key-dfs-encryption-key-provider-uri-to/td-p/31637
Main conclusion: It is a non-issue
Here is the answer that was provided by the support staff:
CDH's base release versions are just that: base. The fix for the
harmless log print due to HDFS-7931 is present in all CDH5 releases
since CDH 5.4.1.
If you see that error in the context of having configured a KMS, then it's
a worthy one to consider. If you do not use KMS or EZs, then the error
may be ignored. Alternatively upgrade to the latest CDH5 (5.4.x or
5.5.x) releases to receive a bug fix that makes the error only appear when in the context of a KMS being configured over an encrypted path.
Per your log snippet, I don't see a problem (the canary does not
appear to be failing?). If you're trying to report a failure, please
send us more characteristics of the failure, as HDFS-7931 is a minor
issue with an unnecessary log print.
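For completeness: if a KMS genuinely is in use, the client configuration needs the property named in the error set in hdfs-site.xml, along these lines (kms-host and the port are placeholders, not values taken from this cluster):
<property>
  <name>dfs.encryption.key.provider.uri</name>
  <value>kms://https@kms-host:16000/kms</value>
</property>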

security.UserGroupInformation: PriviledgedActionException error for MR

Whenever I try to execute a map reduce job that writes to an HBase table, I get the following error in the console. I am running the MR job from the user account.
ERROR security.UserGroupInformation: PriviledgedActionException as:user cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/data1/input/Filename.csv
I did a hadoop ls; user is the owner of the file.
-rw-r--r-- 1 user supergroup 7998682 2014-04-17 18:49 /data1/input/Filename.csv
All my daemons are running fine, and if I use the HBase client API I am able to insert.
Please help, thanks in advance.
Thanks,
KG
If you look at the following path
Input path does not exist: file:/data1/input/Filename.csv
you can see that it is pointing to the local filesystem, not to HDFS. Try prefixing the path with the hdfs filesystem scheme, as follows:
hdfs://<NAMENODE-HOST>:<IPC-PORT>/data1/input/Filename.csv
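Two quick checks along the same lines (the jar and class names below are made up for illustration, and <NAMENODE-HOST>:<IPC-PORT> stays a placeholder as above):
# Confirm the file really is in HDFS; the file:/ prefix in the error means the local filesystem was used.
hdfs dfs -ls /data1/input/Filename.csv
# Then submit with the fully qualified URI as the input path.
hadoop jar my-mr-job.jar com.example.HBaseLoadJob hdfs://<NAMENODE-HOST>:<IPC-PORT>/data1/input/Filename.csv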
