I'm seeing the error message
Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=airflow, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399) at ...
when trying to start the H2O cluster (h2o-3.28.0.1-hdp3.1). I.e. it appears that it does not like that the HDFS root dir hdfs:/// is not writable by my user (and granting write access to my user via Ranger does appear to fix the problem), but this seems wrong.
From past experience, I've seen this in cases where the launching user does not have write permissions to their own hdfs:///user/<username> folder, but it seems odd to me that H2O wants the user to have write access over the entire top-level HDFS dir. Is this normal? Can I change this?
Possibly related: I'm finding that after starting the cluster, I can't manually kill it from the YARN ResourceManager UI or by killing the PID; instead I need to go to the H2O cluster URL and use the Admin tab to shut down the cluster. Any ideas why this would happen?
Found the problem. I can't find the docs / other post detailing this right now, but basically, when running the hadoop jar h2odriver.jar ... command, there is an optional param called -output where you would normally put an HDFS location that H2O will write output to (from what I can recall, this is some legacy directory that is not super important).
I had forgotten that this is an HDFS location and put some local temp folder's absolute path. The error arose because H2O was trying to create that folder by creating the entire path in HDFS that led to it, thus requiring write access from the HDFS root dir. The correct value would be something like /user/<username>.
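For reference, a minimal sketch of a launch command with -output pointing at an HDFS path under the user's home directory (the node count, memory, and directory name here are just illustrative):

hadoop jar h2odriver.jar -nodes 3 -mapperXmx 6g -output /user/<username>/h2o_output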
Related
I have a pipeline in NiFi of the form ListHDFS -> MoveHDFS. Attempting to run the pipeline, we see the error log
13:29:21 HSTDEBUG01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Returning CLUSTER State: StandardStateMap[version=43, values={emitted.timestamp=1525468790000, listing.timestamp=1525468790000}]
13:29:21 HSTDEBUG01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Found new-style state stored, latesting timestamp emitted = 1525468790000, latest listed = 1525468790000
13:29:21 HSTDEBUG01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Fetching listing for /hdfs/path/to/dir
13:29:21 HSTERROR01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Failed to perform listing of HDFS due to File /hdfs/path/to/dir does not exist: java.io.FileNotFoundException: File /hdfs/path/to/dir does not exist
Changing the ListHDFS path to /tmp seems to run OK, which makes me think the problem is with my permissions on the directory I'm trying to list. However, changing the NiFi user to a user that can access that directory (e.g. via hadoop fs -ls /hdfs/path/to/dir) by setting the bootstrap.conf value run.as=myuser and restarting (see https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#bootstrap_properties) still produces the same problem for the directory. The literal directory string being used that is not working is:
"/etl/ucera_internal/datagov_example/raw-ingest-tracking/version-1/ingest"
Does anyone know what is happening here? Thanks.
** Note: The Hadoop cluster I am accessing does not have Kerberos enabled (it is a secured MapR Hadoop cluster).
Update: It appears that the MapR Hadoop implementation is different enough that it requires special steps for NiFi to work properly on it (see https://community.mapr.com/thread/10484 and http://hariology.com/integrating-mapr-fs-and-apache-nifi/). I may not get a chance to work on this problem for some time to see if it still works (as certain requirements have changed), so I am dumping the links here for others who may have this problem in the meantime.
Could you make sure you have entered the correct path, and that the directory exists in HDFS?
It seems the ListHDFS processor is not able to find the directory that you have configured in the Directory property, and the logs are not showing any permission-denied issues.
If the logs do show permission denied, then you can change the NiFi run-as user in bootstrap.conf (NiFi then needs a restart to apply the change), or change the permissions on the directory so that NiFi has access.
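For example, assuming the directory string from the question, a quick sanity check from the command line (run as the same user NiFi runs as) would look something like:

# does the directory exist and can this user list it?
hadoop fs -ls /etl/ucera_internal/datagov_example/raw-ingest-tracking/version-1/ingest
# if it exists but NiFi's user cannot read it, either change run.as in conf/bootstrap.conf
# and restart NiFi, or open up the directory permissions, e.g.:
hadoop fs -chmod -R o+rx /etl/ucera_internal/datagov_example/raw-ingest-tracking/version-1/ingest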
I am new to Hadoop and am trying to execute the WordCount Problem.
Things I have done so far:
Set up a Hadoop single-node cluster following the link below:
http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php
Wrote the word count program following the link below:
https://kishorer.in/2014/10/22/running-a-wordcount-mapreduce-example-in-hadoop-2-4-1-single-node-cluster-in-ubuntu-14-04-64-bit/
The problem is when I execute the last command to run the program:
hadoop jar wordcount.jar /usr/local/hadoop/input /usr/local/hadoop/output
Following is the error I get -
The directory seems to be present
The file is also present in the directory with contents
Finally, on a side note, I also tried the following directory structure in the jar command.
To no avail! :/
I would really appreciate it if someone could guide me here!
Regards,
Paul Alwin
Your first image is using input from the local Hadoop installation directory, /usr
If you want to use that data on your local filesystem, you can specify file:///usr/...
Otherwise, if you're running in pseudo-distributed mode, HDFS has been set up, and /usr does not exist in HDFS unless you explicitly created it there.
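For example (a sketch assuming a default pseudo-distributed setup, where <username> is your Linux user and the output paths do not already exist), you could either copy the data into HDFS and point the job at HDFS paths:

hdfs dfs -mkdir -p /user/<username>/input
hdfs dfs -put /usr/local/hadoop/input/* /user/<username>/input
hadoop jar wordcount.jar /user/<username>/input /user/<username>/output

or keep the data on the local filesystem and use explicit file:// URIs:

hadoop jar wordcount.jar file:///usr/local/hadoop/input file:///usr/local/hadoop/wordcount_output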
Based on the stacktrace, I believe the error comes from the /app/hadoop/ staging directory path not existing, or its permissions not allowing your current user to run commands against that path.
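If it is the staging path, a rough sketch of the usual fix (the exact directory is whatever the stacktrace names; /app/hadoop/tmp is just the hadoop.tmp.dir value used in the tutorial linked above, and the hadoop group is the one that tutorial creates) is to create it and hand it to your user:

sudo mkdir -p /app/hadoop/tmp
sudo chown -R $USER:hadoop /app/hadoop/tmp
sudo chmod 750 /app/hadoop/tmp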
Suggestion: Hortonworks and Cloudera offer pre-built VirtualBox images and lots of tutorial resources. Most companies will have Hadoop from one of those vendors, so it's better to get familiar with that rather than mess around with having to install Hadoop yourself from scratch, in my opinion
I'm trying to read, through Spark, a file inside a folder that only I (and root) can read/write. First I start the shell with:
spark-shell --master yarn-client
then I:
val base = sc.textFile("file:///mount/bases/FOLDER_LOCKED/folder/folder/file.txt")
base.take(1)
And got the following error:
2018-02-19 13:40:20,835 WARN scheduler.TaskSetManager:
Lost task 0.0 in stage 0.0 (TID 0, mydomain, executor 1):
java.io.FileNotFoundException: File file: /mount/bases/FOLDER_LOCKED/folder/folder/file.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
...
I suspect that, as YARN/Hadoop was launched by the user hadoop, it can't go into this folder to get the file. How could I solve this?
Note: this folder can't be opened to other users because it contains private data.
EDIT 1: /mount/bases is network storage, mounted via a CIFS connection.
EDIT 2: HDFS and YARN were launched by the user hadoop.
As hadoop was the user that launched HDFS and YARN, it is the user that will try to open the file in a job, so it must be authorized to access this folder. Fortunately, Hadoop checks which user is executing the job before allowing access to a folder/file, so you will not be taking a risk here.
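A rough way to check and, if needed, grant that access without opening the folder to everyone (paths taken from the question; the setfacl part assumes the CIFS mount honours POSIX ACLs, which it may not):

# can the hadoop user actually reach the file?
sudo -u hadoop ls -l /mount/bases/FOLDER_LOCKED/folder/folder/file.txt
# if not, grant read/execute along the path to that user only, e.g. with ACLs:
sudo setfacl -m u:hadoop:rx /mount/bases/FOLDER_LOCKED /mount/bases/FOLDER_LOCKED/folder /mount/bases/FOLDER_LOCKED/folder/folder
sudo setfacl -m u:hadoop:r /mount/bases/FOLDER_LOCKED/folder/folder/file.txt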
Well, if it had been an access-related issue with the file, you would have got 'access denied' as the error. In this particular scenario, I think the file you are trying to read is not present at all, or might have a slightly different name (typos). Just check the file name.
I was trying to run a long-running Spark job. After a few hours of execution, I get the exception below:
Caused by: java.io.IOException: Failed to create local dir in /tmp/blockmgr-bb765fd4-361f-4ee4-a6ef-adc547d8d838/28
Tried to get around it by checking:
Permission issues in the /tmp dir. The Spark server is not running as root, but the /tmp dir should be writable by all users.
The /tmp dir has enough space.
Assuming that you are working with several nodes, you'll need to check every node that participates in the Spark operation (master/driver + slaves/nodes/workers).
Please confirm that each worker/node has enough disk space (especially check the /tmp folder) and the right permissions.
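A minimal set of checks to run on each node (the /tmp location is Spark's default spark.local.dir; adjust if you have overridden it, and run the write test as whatever user the executors run as):

df -h /tmp       # enough free space?
ls -ld /tmp      # typically drwxrwxrwt (world-writable, sticky bit)
touch /tmp/_writetest && rm /tmp/_writetest      # can this user actually write here?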
Edit: The answer below did not ultimately solve my case. Spark (or some of its dependencies) was able to create some of these subfolders, yet not all of them, and the frequent need to create such paths by hand would make any project unviable. Therefore I ran Spark (PySpark in my case) as an Administrator, which solved the case. So in the end it is probably a permission issue after all.
Original answer:
I solved the same problem I had on my local Windows machine (not a cluster). Since there was no problem with permissions, I created the dir that Spark was failing to create, i.e. I created the following folder as a local user and did not need to change any permissions on that folder.
C:\Users\<username>\AppData\Local\Temp\blockmgr-97439a5f-45b0-4257-a773-2b7650d17142
After verifying all the permissions and user access, I got the same issue when building the components in Talend Studio. It was resolved by providing the correct "/" in the Spark scratch directory (temp directory) in the Spark Configuration tab. This is required when building the jar on Windows and running it on a Linux cluster.
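As a related sketch for non-Talend setups, the same scratch location can be set explicitly through Spark's spark.local.dir property (the path below is illustrative; it just needs to exist and be writable on every node):

spark-shell --conf spark.local.dir=/data/spark-scratch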
I have been able to set up the streaming example with a Python mapper & reducer. The mapred folder location is /mapred/local/taskTracker.
Both the root and mapred users have ownership of this folder and its subfolders.
However, when I run my streaming job it creates the map tasks but no reduce tasks, and gives the following error:
Cannot Run Program
/mapred/local/taskTracker/root/jobcache/job_201303071607_0035/attempt_201303071607_0035_m_000001_3/work/./mapper1.py
Permission Denied
I noticed that although I have given a+rwx permissions to /mapred/local/taskTracker and all its subdirectories, when MapReduce creates the temp folders for this job, the folders do not have rwx for all users ... and hence I get the permission denied error.
I have been looking for forum threads on this, and though there are threads mentioning the same error ... I could not find any responses with resolutions.
Any help would be greatly appreciated.
I assume that you run your Hadoop daemons as user root. In this case, the permissions of newly created files are determined by the umask of user root. However, you must not change the umask for root.
If you'd like to run the MapReduce jobs and the cluster as different users, it would be better to start the Hadoop daemons as user hadoop and submit the MapReduce jobs as user mapreduce. However, both users should belong to the same group, i.e. hadoop. Furthermore, the umask for user hadoop should be set accordingly.
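A rough sketch of that separation (user and group names as suggested above; the exact daemon start scripts depend on your Hadoop version):

# one shared group, two users
sudo groupadd hadoop
sudo useradd -m -g hadoop hadoop
sudo useradd -m -g hadoop mapreduce
# give the hadoop user a group-friendly umask, e.g. in its shell profile
echo 'umask 002' | sudo tee -a /home/hadoop/.bashrc
# then start the Hadoop daemons as the hadoop user (not root) and submit the streaming job as mapreduce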