I installed Impala via a parcel in Cloudera Manager 4.5 on a CDH 4.2.0-1.cdh4.2.0.p0.10 cluster.
When I try to start the service, it fails on all nodes with this message:
perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/800-impala-IMPALAD#g' /run/cloudera-scm-agent/process/800-impala-IMPALAD/impala-conf/impalad_flags
'[' impalad = impalad ']'
exec /opt/cloudera/parcels/IMPALA-0.6-1.p0.109/lib/impala/../../bin/impalad --flagfile=/run/cloudera-scm-agent/process/800-impala-IMPALAD/impala-conf/impalad_flags
Could not create logging file: Permission denied
COULD NOT CREATE A LOGGINGFILE 20130326-204959.15015!log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /var/log/impalad/impalad.INFO (Permission denied)
at java.io.FileOutputStream.openAppend(Native Method)
...
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:92)
+ date
Complete StdErr Log
I'm unsure whether the permission issue is the cause of Impala not running, or whether something else crashes and the permission error only appears because the crash log cannot be written.
Any help would be great!
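The stack trace itself suggests a first check: /var/log/impalad/impalad.INFO could not be opened for append. Below is a minimal local sketch of that symptom and the usual fix; the user name "impala" is an assumption (whatever user impalad actually runs as must own the log directory), and the temp directory only stands in for /var/log/impalad so the sketch can be tried without a cluster.

```shell
#!/bin/sh
# Simulate the symptom: a log directory the process user cannot write into.
logdir=$(mktemp -d)          # stand-in for /var/log/impalad
chmod 555 "$logdir"          # no write bit, as if owned by another user
if touch "$logdir/impalad.INFO" 2>/dev/null; then
    echo "writable"          # you will see this if you run the sketch as root
else
    echo "Permission denied" # same symptom as in the stack trace
fi
# The fix on the real node (assumption: impalad runs as user "impala"):
#   sudo chown -R impala: /var/log/impalad
chmod 755 "$logdir"
touch "$logdir/impalad.INFO" && echo "writable after fix"
rm -rf "$logdir"
```

If the directory is already owned by the right user, the permission error is probably only the crash log failing to write, and the real cause lies elsewhere.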
Run Impala from the debug binaries as described here:
https://issues.cloudera.org/browse/IMPALA-160
It seems to be related to the JVM on Ubuntu 12.04.1 LTS.
Original Answer: https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!topic/impala-user/4MRZYbn5hI0
Environment
Flink-1.4.2
Hadoop 2.6.0-cdh5.13.0 with 4 nodes in service and Security is off.
Ubuntu 16.04.3 LTS
Java 8
Description
I have a Java job on Flink 1.4.0 that writes to HDFS at a specific path.
After updating to Flink 1.4.2, I get the following error from Hadoop, complaining that the user doesn't have write permission for the given path:
WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:xng (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=user1, access=WRITE, inode="/user":hdfs:hadoop:drwxr-xr-x
NOTE:
If I run the same job on Flink 1.4.0, the error disappears, regardless of which Flink version (1.4.0 or 1.4.2) the job's dependencies use.
Also, if I run the job's main method from my IDE and pass the same parameters, I don't get the above error.
Question
Any idea what's wrong, or how to fix it?
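One common cause of exactly this inode="/user" error is that the submitting user has no HDFS home directory, so a relative output path resolves under /user/<name>, and creating that directory requires write access on /user. A hedged sketch of the fix follows; the HDFS superuser "hdfs" and the job user "user1" (taken from the log line) are assumptions, and DRY_RUN=1 only prints the commands so the sketch can be exercised without a cluster.

```shell
#!/bin/sh
# Create an HDFS home directory for the submitting user and hand it over.
# DRY_RUN=1 prints the commands instead of executing them.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

run sudo -u hdfs hdfs dfs -mkdir -p /user/user1
run sudo -u hdfs hdfs dfs -chown user1:user1 /user/user1
```

It may also be worth diffing the effective Hadoop configuration between the 1.4.0 and 1.4.2 installations; from the IDE the job runs as your local user against whatever config is on the classpath, which may explain why the error does not appear there.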
I am getting the following error (through the command line as well as the web interface).
Useful info:
1. The Hive, HDFS, and YARN services are up and running.
2. I can even get into the hive prompt through the command line and the web interface. The error occurs when I use show databases (or click the refresh symbol in the database explorer of the web interface).
3. I logged in as the root user and as the hdfs user.
4. I tried changing permissions to 755 for the directory /user/root.
Any help would be greatly appreciated.
------------------start of error message (copied from that of web-interface log)
Unable to submit statement. Error while processing statement: FAILED: Hive Internal Error: com.sun.jersey.api.client.ClientHandlerException(java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 503, message: Service Unavailable) [ERROR_STATUS].
Step 1) Restart Atlas on Sandbox.
Step 2) Restart Hive services on Sandbox.
For me this resolved the issue.
Cheers
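For scripted restarts, the same two steps can be driven through Ambari's REST API (a sketch; the cluster name "Sandbox", the endpoint localhost:8080, and the admin:admin credentials are all assumptions that vary per installation, and DRY_RUN=1 prints the calls instead of issuing them):

```shell
#!/bin/sh
# Stop (state INSTALLED) then start (state STARTED) a service via Ambari.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

restart_service() {  # $1 = service name, e.g. ATLAS or HIVE
    for state in INSTALLED STARTED; do
        run curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
            -d "{\"RequestInfo\":{\"context\":\"$state $1\"},\"Body\":{\"ServiceInfo\":{\"state\":\"$state\"}}}" \
            "http://localhost:8080/api/v1/clusters/Sandbox/services/$1"
    done
}

restart_service ATLAS
restart_service HIVE
```

Each PUT returns a request object; waiting for the stop request to finish before issuing the start is advisable on a real cluster.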
I'm trying to mount my HDFS using the NFS gateway, as documented here:
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
Unfortunately, following the documentation step by step does not work for me (Hadoop 2.7.1 on CentOS 6.6). When executing the mount command I receive the following error message:
[root@server1 ~]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync server1:/ /hdfsmount/
mount.nfs: mounting server1:/ failed, reason given by server: No such file or directory
I created the folder /hdfsmount, so it definitely exists. My questions are now:
Has anyone faced the same issue?
Do I have to configure the NFS server before I start following the steps in the documentation (e.g., I read about editing /etc/exports)?
Any help is highly appreciated!
I found the problem deep in the logs. When executing the command (see below) to start the nfs3 component of HDFS, the executing user needs permission to delete /tmp/.hdfs-nfs, which is configured as nfs.dump.dir in core-site.xml.
If the permissions are not set, you'll receive a log message like:
15/08/12 01:19:56 WARN fs.FileUtil: Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.
Exception in thread "main" java.io.IOException: Cannot remove current dump directory: /tmp/.hdfs-nfs
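One non-root fix is to point the dump directory at a path the user starting nfs3 owns. A hedged config fragment (the upstream HdfsNfsGateway docs place nfs.dump.dir in hdfs-site.xml; the value below is only an example path):

```xml
<property>
  <name>nfs.dump.dir</name>
  <!-- any directory the user starting nfs3 can delete and recreate -->
  <value>/var/hadoop/.hdfs-nfs</value>
</property>
```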
Another option is to simply start the nfs component as root.
[root]> /usr/local/hadoop/sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start nfs3
I have Hadoop configured on my Red Hat system. I am getting the following error when $HIVE_HOME/bin/hive is executed:
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.checkAndCreate(File.java:1704)
at java.io.File.createTempFile(File.java:1792)
at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
Hive uses a 'metastore'; it creates this directory when you invoke it for the first time. The metastore directory is usually created in your current working directory (i.e., where you run the hive command).
Which directory are you invoking the hive command from? Do you have write permissions there?
Try this:
cd     # this takes you to your home directory, where you will have write permissions
hive
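A quick pre-flight check along the same lines (a sketch; besides the working directory for metastore_db, hive startup in Hadoop 1.x also unpacks jars under hadoop.tmp.dir, which defaults to /tmp/hadoop-<user>, so both paths should be writable):

```shell
#!/bin/sh
# Check that the directories hive touches on startup are writable.
cd "$HOME"                               # home directory is normally writable
for d in "$PWD" "/tmp/hadoop-$USER"; do  # cwd (metastore_db) and default hadoop.tmp.dir
    if [ -w "$d" ]; then
        echo "writable: $d"
    else
        echo "NOT writable: $d"          # also printed if it does not exist yet
    fi
done
```

If both lines report writable, the IOException at startup should not recur.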
I am getting the following error when I try to run pig -help.
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.checkAndCreate(File.java:1717)
at java.io.File.createTempFile0(File.java:1738)
at java.io.File.createTempFile(File.java:1815)
at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
Here is my configuration:
Apache Hadoop - 1.0.3
Apache Pig - 0.10.0
OS - Ubuntu 64-bit
User for whom the error is seen: "sumod". This is an admin-level account. I have also created a directory for him in HDFS.
User for whom the error is NOT seen: "hadoop". I created this user for Hadoop jobs. He is not an admin user, but he belongs to "supergroup" on HDFS.
The paths are properly set for both the users.
I do not have to start Hadoop while running the "pig -help" command; I only want to make sure that Pig is installed properly.
I am following the Apache docs, and my understanding is that I do not have to be the hadoop user to run Pig; I can be a general system user.
Why am I getting these errors? What am I doing wrong?
I had seen the same exception. The reason, in my case, was that the user I was running Pig as did not have write permission on ${hadoop.tmp.dir}.
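To check that, first find the effective value (a sketch; the commented chown at the end is illustrative only and assumes the default /tmp/hadoop-<user> layout plus the user "sumod" from the question):

```shell
#!/bin/sh
# Look up hadoop.tmp.dir in core-site.xml; if unset, Hadoop 1.x defaults to
# /tmp/hadoop-${user.name}.
grep -A1 "hadoop.tmp.dir" "$HADOOP_HOME/conf/core-site.xml" 2>/dev/null \
    || echo "not set -> default /tmp/hadoop-$USER"
# Then hand the directory to the user running pig, e.g.:
#   sudo chown -R sumod: /tmp/hadoop-sumod
```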
Please check the permissions of the directory where the pig script is placed.
Whenever a pig script is executed, errors are logged to a log file, which is written in your present working directory.
Suppose your pig script is in dir1 and your pwd is dir2; since you are executing as user sumod, sumod must have write permissions in dir2.