Where do we configure the credentials in the PutHDFS processor in Apache NiFi? - hortonworks-data-platform

I have configured a path in PutHDFS, but it is throwing an authentication error.
LOG:
2017-03-03 01:52:29,200 DEBUG [IPC Client (1496249304) connection to dnn01.com/10.4.151.88:8020 from root] org.apache.hadoop.ipc.Client IPC Client (1496249304) connection to dnn01.com/10.4.151.88:8020 from root got value #39976
2017-03-03 01:52:29,201 TRACE [Timer-Driven Process Thread-7] org.apache.hadoop.ipc.ProtobufRpcEngine 105: Exception <- dnn01.com/10.4.151.88:8020: create {org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/raw/externaltbls/falcon/testing/.1PUGETSLA_PO810.P0125.EDIINV.P20150125.107.20160304025143629.gz":hdfs:hdfs:drwxrwxr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
2017-03-03 01:52:29,201 ERROR [Timer-Driven Process Thread-7] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=015a1010-9c64-1ed3-c39b-d19ab2dfe19b] Failed to write to HDFS due to org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=015a1010-9c64-1ed3-c39b-d19ab2dfe19b]: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/raw/externaltbls/falcon/testing/.1PUGETSLA_PO810.P0125.EDIINV.P20150125.107.20160304025143629.gz":hdfs:hdfs:drwxrwxr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

The PutHDFS processor is going to execute as the OS user that NiFi is running as. In your case it looks like you are running NiFi as root because the log says "Permission denied: user=root, access=WRITE".
Your options are:
Give root WRITE access to the directory you are writing to (/raw/externaltbls/falcon/testing/); example commands are shown below
Run NiFi as a different user who has WRITE access
Use Kerberos and specify the principal and keytab in the PutHDFS processor
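For the first option, a minimal sketch of what granting that access could look like, assuming the HDFS superuser account is 'hdfs' and using the directory from the error above (adjust owner, group and mode to your own policy):
# run on a host with an HDFS client; 'hdfs' is assumed to be the HDFS superuser
sudo -u hdfs hdfs dfs -chown -R root:hdfs /raw/externaltbls/falcon/testing
# or leave ownership alone and simply open the directory up:
sudo -u hdfs hdfs dfs -chmod -R 777 /raw/externaltbls/falcon/testing
For the Kerberos option, the principal and keytab are entered in the processor's configuration properties; the exact property names depend on your NiFi version.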

Related

ERROR when running start-dfs.sh in Hadoop-3.2.0

I have met some problems configuring Hadoop 3.2.1 while learning YARN. I found that running sbin/start-all.sh behaves differently depending on whether I run it as user root or as user host1. Can you tell me how to solve this, and whether it is related to SSH? Thank you very much.
As root:
root@host1-virtual-machine:/home/host1/usr/hadoop-3.2.1# sbin/start-all.sh
Starting namenodes on [localhost]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [host1-virtual-machine]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
2020-02-12 14:40:27,093 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.
As user host1:
host1@host1-virtual-machine:~/usr/hadoop-3.2.1$ sbin/start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as host1 in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
localhost: host1@localhost: Permission denied (publickey,password).
Starting datanodes
localhost: host1@localhost: Permission denied (publickey,password).
Starting secondary namenodes [host1-virtual-machine]
host1-virtual-machine: host1@host1-virtual-machine: Permission denied (publickey,password).
Starting resourcemanager
Starting nodemanagers
localhost: host1@localhost: Permission denied (publickey,password).
You need to set up a passwordless SSH connection between the nodes. This link might help:
http://mynotesonhadoop.blogspot.com/2017/07/configuring-passwordless-ssh-from.html?m=1
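A minimal sketch of setting that up for the single-node layout in the question (host1 connecting to itself over SSH), using the OpenSSH defaults:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa        # generate a key pair with an empty passphrase
ssh-copy-id host1@localhost                     # install the public key for the localhost entries
ssh-copy-id host1@host1-virtual-machine         # and for the hostname used by the secondary namenode
ssh host1@localhost true                        # verify: should not prompt for a password
Run this as the host1 user, then re-run sbin/start-all.sh.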

Hadoop Access Control Exception: Permissions

Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE,
inode="/mnt/var/lib/hadoop/tmp/2204827016_Attaching_UU_Codes_5C4141BF22014C8FAD3CD045070589C0/_temporary/1":hadoop:hadoop:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
It seems that only the user 'hadoop' can write to this location. What are the possible workarounds?
Your user's rights are not enough; try running the job as the right user (e.g. with sudo) or change the permissions with chmod/chown, for example as sketched below.
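As an illustrative sketch only (the cluster layout beyond the error message is not shown): either submit the job as the 'hadoop' user that owns the staging directory, or widen that directory's permissions so 'root' can write to it. For example:
sudo -u hadoop hadoop jar your-job.jar ...      # 'your-job.jar' is a placeholder for the actual job
# or, as the directory's owner, open the staging path from the error up for other users:
sudo -u hadoop hadoop fs -chmod -R 777 /mnt/var/lib/hadoop/tmp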

Permission denied error while creating a database through Java in Hive

I have tried to create a database in Hive using Java, but I got the following error while running the code:
Exception in thread "main" java.sql.SQLException: Error while
processing statement: FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got
exception: org.apache.hadoop.security.AccessControlException
Permission denied: user=hive, access=WRITE,
inode="/user/hive/warehouse/sampledb.db":root:supergroup:drwxr-xr-x
I have given read and write privileges to the folder /user/hive/warehouse/, but I am still getting this error. Any remedies?
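For reference, a hedged sketch of what granting those privileges typically involves; the error shows the database path owned by root:supergroup with mode drwxr-xr-x, so 'hive' still has no WRITE bit unless the ownership or mode of that path is changed:
# run as the HDFS superuser; the path comes from the error message above
sudo -u hdfs hdfs dfs -chown -R hive:supergroup /user/hive/warehouse
# or keep the owner and widen the mode so group and others can write:
sudo -u hdfs hdfs dfs -chmod -R 777 /user/hive/warehouse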

Error with DataStax Enterprise with Hadoop enabled and Kerberos enabled

I have configured DSE with Hadoop enabled and Kerberos authentication, but I see this ERROR in the log. I can execute dse hadoop fs commands and nodetool commands, but I cannot run MapReduce jobs.
The following is the log:
ERROR [TASK-TRACKER-INIT] 2014-02-07 20:45:03,813 TaskTrackerRunner.java (line 128) Hadoop Task Tracker caused an exception in state STARTING:
java.io.IOException: Cannot run program "/usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller" (in directory "."): error=13, Permission denied
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
at org.apache.hadoop.util.Shell.startProcess(Shell.java:199)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:225)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:401)
at org.apache.hadoop.mapred.LinuxTaskController.setup(LinuxTaskController.java:137)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1470)
at com.datastax.bdp.hadoop.mapred.TaskTrackerRunner.initService(TaskTrackerRunner.java:104)
at com.datastax.bdp.hadoop.mapred.TaskTrackerRunner.initService(TaskTrackerRunner.java:31)
at com.datastax.bdp.hadoop.mapred.ServiceRunner.run(ServiceRunner.java:121)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: error=13, Permission denied
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
... 10 more
ERROR [Thrift:8] 2014-02-07 20:45:12,624 TNegotiatingServerTransport.java (line 293) An error occurred during transport negotiation
com.datastax.bdp.transport.common.TTransportNegotiationException: Improper authentication type requested. Requested auth: No authentication with service principal: FRAMED_TRANSPORT_FAKE_PRINCIPAL, Allowed auth: Kerberos
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getUnderlyingFactory(TNegotiatingServerTransport.java:485)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:286)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.open(TNegotiatingServerTransport.java:192)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:517)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:408)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:193)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
ERROR [Thrift:8] 2014-02-07 20:45:12,625 TNegotiatingServerTransport.java (line 524) Failed to open server transport.
com.datastax.bdp.transport.common.TTransportNegotiationException: An error occurred during transport negotiation
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:294)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.open(TNegotiatingServerTransport.java:192)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:517)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:408)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:193)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: com.datastax.bdp.transport.common.TTransportNegotiationException: Improper authentication type requested. Requested auth: No authentication with service principal: FRAMED_TRANSPORT_FAKE_PRINCIPAL, Allowed auth: Kerberos
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getUnderlyingFactory(TNegotiatingServerTransport.java:485)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:286)
... 7 more
ERROR [Thrift:8] 2014-02-07 20:45:12,626 CustomTThreadPoolServer.java (line 219) Error occurred during processing of message.
java.lang.RuntimeException: Failed to open server transport: unknown
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:525)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:408)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:193)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: com.datastax.bdp.transport.common.TTransportNegotiationException: An error occurred during transport negotiation
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:294)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.open(TNegotiatingServerTransport.java:192)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:517)
... 5 more
Caused by: com.datastax.bdp.transport.common.TTransportNegotiationException: Improper authentication type requested. Requested auth: No authentication with service principal: FRAMED_TRANSPORT_FAKE_PRINCIPAL, Allowed auth: Kerberos
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getUnderlyingFactory(TNegotiatingServerTransport.java:485)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:286)
... 7 more
WARN [TASK-TRACKER-INIT] 2014-02-07 20:45:13,828 MetricsSystemImpl.java (line 200) Source name ugi already exists!
This is the task-controller:
-rwsr-x--- 1 root cassandra 40111 Jan 9 18:14 /usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller
I am using
DSE 3.2.3
Java 1.7.0_25
I have configured cassandra.yaml, dse.yaml, core-site.xml, mapred-site.xml, and the /etc/default/dse files properly.
You should not start the daemon from root's home directory. This sounds weird, but try starting the daemon from a directory other than root's home directory.
/usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller" (in directory "."): error=13, Permission denied
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
The above clearly shows that the user does not have permission to run it. Change the permissions so that the user running DSE has execute permission on the file; use chmod to change the permissions (a sketch follows).
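A sketch of what that could look like, given the listing above (-rwsr-x--- 1 root cassandra, so only root and the 'cassandra' group may execute it); <dse_user> and <dse_group> are placeholders for the account actually running the DSE TaskTracker:
sudo usermod -a -G cassandra <dse_user>         # simplest: put the DSE account into the 'cassandra' group
# or re-point the group on the binary while keeping its setuid bit:
sudo chgrp <dse_group> /usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller
sudo chmod 4750 /usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller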

Installing Hue, permission denied error?

I'm getting the following while trying to build Hue:
(6211) *** Controller starting at Thu Aug 8 11:29:50 2013
Should start 1 new children
Controller.spawn_children(number=1)
$HADOOP_HOME=
$HADOOP_BIN=/usr/local/hadoop/bin/hadoop
$HIVE_CONF_DIR=~/hive-0.10.0/conf
$HIVE_HOME=~/hive-0.10.0
find: `~/hive-0.10.0/lib': No such file or directory
$HADOOP_CLASSPATH=:
$HADOOP_OPTS=-Dlog4j.configuration=log4j.properties
$HADOOP_CONF_DIR=~/hive-0.10.0/conf:/usr/local/hadoop/conf
$HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
CWD=/usr/local/hue/desktop/conf
Executing /usr/local/hadoop/bin/hadoop jar /usr/local/hue/apps/beeswax/src/beeswax/../../java-lib/BeeswaxServer.jar --beeswax 8002 --desktop-host 127.0.0.1 --desktop-port 8888 --query-lifetime 604800000 --metastore 8003
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:1879)
at org.apache.hadoop.util.RunJar.main(RunJar.java:119)
I've changed the configuration file so it doesn't use the 'hue' user but rather the user that I'm logged in as, which has read and write permissions in the Hadoop DFS, Hadoop, Hive, etc. Not sure why it's doing this...
It seems that it is trying to start Beeswax in /usr/local/hue/desktop/conf. Beeswax should be running as the 'hue' user by default (https://github.com/cloudera/hue/blob/master/desktop/core/src/desktop/supervisor.py#L67), so that directory needs to be writable by 'hue'.
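A minimal sketch of that fix, assuming a standard /usr/local/hue install currently owned by root:
sudo chown -R hue:hue /usr/local/hue/desktop/conf    # or the whole /usr/local/hue tree, depending on the install
sudo -u hue ls -ld /usr/local/hue/desktop/conf       # verify the directory is now owned (and writable) by 'hue'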
