Spark/Hadoop can't read root files

I'm trying to read a file inside a folder that only I (and root) can read/write, through Spark. First I start the shell with:
spark-shell --master yarn-client
then I run:
val base = sc.textFile("file:///mount/bases/FOLDER_LOCKED/folder/folder/file.txt")
base.take(1)
and get the following error:
2018-02-19 13:40:20,835 WARN scheduler.TaskSetManager:
Lost task 0.0 in stage 0.0 (TID 0, mydomain, executor 1):
java.io.FileNotFoundException: File file:/mount/bases/FOLDER_LOCKED/folder/folder/file.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
...
I suspect that because YARN/Hadoop was launched as the user hadoop, it cannot descend into this folder to get the file. How could I solve this?
Note: this folder can't be opened to other users because it holds private data.
EDIT 1: This /mount/bases is network storage, mounted over a CIFS connection.
EDIT 2: HDFS and YARN were launched by the user hadoop.

As hadoop was the user that launched HDFS and YARN, it is the user that will try to open the file inside a job, so it must be authorized to access this folder. Fortunately, Hadoop checks which user is executing the job before allowing access to a folder/file, so you are not taking a risk by granting it.
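One hedged way to grant that access without opening the folder to other users is to put the hadoop user (and yourself) in a dedicated group and remount the share for that group. A minimal sketch, assuming you control the mount options; the share path, group name, and credentials file are illustrative:

groupadd basereaders
usermod -aG basereaders hadoop
mount -t cifs //nas/bases /mount/bases -o credentials=/etc/cifs.creds,gid=basereaders,file_mode=0640,dir_mode=0750

Note also that with --master yarn-client the executors resolve file:// paths on their own worker nodes, so the mount (and this fix) must be in place on every node that can run a task.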

Well, if it had been an access-related issue with the file, you would have gotten 'access denied' as the error. In this particular scenario, I think the file you are trying to read is not present at all, or has a slightly different name (typos). Just check the file name.
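A quick way to test both at once is to stat the file as the hadoop user on a worker node; a sketch, assuming you have sudo there:

sudo -u hadoop ls -l /mount/bases/FOLDER_LOCKED/folder/folder/file.txt

If this prints the listing, both the name and the hadoop user's access are fine; if it fails, the message tells you which of the two is the problem.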

Related

Why does h2o require write access on hdfs root directory?

Seeing error message
Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=airflow, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399) at ...
when trying to start the h2o cluster (h2o-3.28.0.1-hdp3.1). I.e., it appears that it does not like that the root HDFS dir hdfs:/// does not have write permissions for my user (and giving my user write access via Ranger does appear to fix the problem), but this seems wrong.
From past experience, I've seen this in cases where the launching user does not have write permissions to their own hdfs:///user/<username> folder, but it seems odd to me that h2o wants the user to have write access over the entire top-level HDFS dir. Is this normal? Can I change this?
Possibly related: I'm finding that after starting the cluster, I can't kill it manually in the YARN ResourceManager UI or by killing the PID; rather, I need to go to the h2o cluster URL and use the Admin tab to shut down the cluster. Any ideas why this would happen?
Found the problem. I can't find the docs / other post detailing this right now, but basically, when running the hadoop jar h2odriver.jar ... command, there is an optional param called -output where you would normally put some HDFS location that h2o will write stuff to (from what I can recall, this is some legacy directory that is not super important).
I had forgotten that this is an HDFS location and put a local temp folder's absolute path there. The error arose because h2o was trying to create that folder by creating the entire path in HDFS that led to it, thus requiring write access from the HDFS root dir. The correct value would be something like /user/<username>.
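For reference, a sketch of the launch command with -output pointing at an HDFS path; the node count, mapper memory, and output dir name are illustrative:

hadoop jar h2odriver.jar -nodes 3 -mapperXmx 6g -output /user/myuser/h2o_out

Since /user/myuser already exists and is writable by its owner, h2o only has to create the final h2o_out component rather than the whole path down from the root.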

NiFi ListHDFS cannot find directory, FileNotFoundException

I have a pipeline in NiFi of the form ListHDFS -> MoveHDFS. Attempting to run the pipeline, we see the error log:
13:29:21 HST DEBUG 01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Returning CLUSTER State: StandardStateMap[version=43, values={emitted.timestamp=1525468790000, listing.timestamp=1525468790000}]
13:29:21 HST DEBUG 01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Found new-style state stored, latesting timestamp emitted = 1525468790000, latest listed = 1525468790000
13:29:21 HST DEBUG 01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Fetching listing for /hdfs/path/to/dir
13:29:21 HST ERROR 01631000-d439-1c41-9715-e0601d3b971c
ListHDFS[id=01631000-d439-1c41-9715-e0601d3b971c] Failed to perform listing of HDFS due to File /hdfs/path/to/dir does not exist: java.io.FileNotFoundException: File /hdfs/path/to/dir does not exist
Changing the ListHDFS path to /tmp seems to run OK, thus making me think that the problem is with my permissions on the directory I'm trying to list. However, changing the NiFi user to a user that can access that directory (e.g. with hadoop fs -ls /hdfs/path/to/dir) by setting the bootstrap.conf value run.as=myuser and restarting (see https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#bootstrap_properties) still seems to produce the same problem for the directory. The literal directory string being used that is not working is:
"/etl/ucera_internal/datagov_example/raw-ingest-tracking/version-1/ingest"
Does anyone know what is happening here? Thanks.
Note: the Hadoop cluster I am accessing does not have Kerberos enabled (it is a secured MapR Hadoop cluster).
Update: It appears that the MapR Hadoop implementation is different enough that it requires special steps for NiFi to work on it properly (see https://community.mapr.com/thread/10484 and http://hariology.com/integrating-mapr-fs-and-apache-nifi/). I may not get a chance to work on this problem for some time (as certain requirements have changed), so I am dumping the links here for others who may run into it in the meantime.
Could you make sure you have entered the correct path, and that the directory exists in HDFS?
It seems the ListHDFS processor is not able to find the directory you have configured in its Directory property, and the logs are not showing any permission-denied issues.
If the logs had shown permission denied, then you could change the NiFi run-as user in bootstrap.conf (NiFi needs a restart for the change to apply), or change the permissions on the directory so that NiFi's current user has access.
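A sketch of both checks from the command line; the chmod variant assumes you would rather open the directory than change NiFi's run-as user:

hadoop fs -ls /etl/ucera_internal/datagov_example/raw-ingest-tracking/version-1/ingest
hadoop fs -chmod -R o+rx /etl/ucera_internal/datagov_example/raw-ingest-tracking

Running the first command as the NiFi user confirms both existence and readability in one step; the second grants read/execute to all other users, which may be too broad for sensitive data.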

Spark - java IOException: Failed to create local dir in /tmp/blockmgr*

I was trying to run a long-running Spark job. After a few hours of execution, I get the exception below:
Caused by: java.io.IOException: Failed to create local dir in /tmp/blockmgr-bb765fd4-361f-4ee4-a6ef-adc547d8d838/28
I tried to get around it by checking:
- permission issues in the /tmp dir (the Spark server is not running as root, but the /tmp dir should be writable by all users);
- whether the /tmp dir has enough space.
Assuming that you are working with several nodes, you'll need to check every node that participates in the Spark operation (master/driver + slaves/nodes/workers).
Please confirm that each worker/node has enough disk space (especially check the /tmp folder) and the right permissions; a per-node check is sketched below.
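A minimal sketch of that check over SSH; the hostnames are illustrative:

for host in master worker1 worker2; do
    ssh "$host" 'df -h /tmp && ls -ld /tmp'
done

df -h shows free space on the filesystem holding /tmp, and ls -ld shows its permission bits and owner.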
Edit: The answer below did not eventually solve my case. It turned out that Spark (or some of its dependencies) was able to create some of the subfolders, yet not all of them. The frequent need to create such paths would make any project unviable, so I ran Spark (PySpark in my case) as an administrator, which solved the case. So in the end it probably is a permission issue after all.
Original answer:
I solved the same problem I had on my local Windows machine (not a cluster). Since there was no problem with permissions, I simply created the dir that Spark was failing to create, i.e. the following folder, as a local user, and did not need to change any permissions on it.
C:\Users\<username>\AppData\Local\Temp\blockmgr-97439a5f-45b0-4257-a773-2b7650d17142
After verifying all the permissions and user access:
I got the same issue when building components in Talend Studio, and it was resolved by providing the correct "/" in the Spark scratch directory (temp directory) on the Spark Configuration tab. This is required when the jar is built on Windows and run on a Linux cluster.
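Outside Talend, the equivalent knob is Spark's spark.local.dir, which controls where the blockmgr-* directories are created. A sketch, where the scratch path and job file are illustrative (and note that on YARN the node manager's local dirs take precedence):

spark-submit --conf spark.local.dir=/data/spark-tmp my_job.py

Pointing this at a writable directory with plenty of space on every node avoids both the permission and the disk-space variants of this failure.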

Hadoop: Pseudo Distributed mode for multiple users

I appreciate your help in advance.
I have set up Hadoop in pseudo-distributed mode using the root user's credentials. I want to give multiple users (let us say hadoop1, hadoop2, etc.) access so they can submit and run MapReduce jobs on this cluster. How do we get this done?
What I have done so far?
> - Set up Hadoop to run in pseudo-distributed mode
> - Used "root" user credentials to set this up
> - Added users hadoop1 and hadoop2 to a group called "hadoop"
> - Added root also to the group "hadoop"
> - Created a folder called hdfstmp and set this as the path for hadoop.tmp.dir
> - Started the cluster using bin/start-all.sh
> - Ran MapReduce jobs as the hadoop1 and hadoop2 users
I got the error below:
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at java.io.File.createTempFile(File.java:1989)
at org.apache.hadoop.util.RunJar.main(RunJar.java:119)
To overcome this error, I gave the group "hadoop" rwx permissions on the folder hdfstmp. The permissions on this folder now look like drwxrwxr-x.
I then submitted MapReduce jobs logged in as the hadoop1 and hadoop2 users. The jobs ran fine without any errors.
However, if I do a stop-all.sh and then a start-all.sh, the DataNode (and occasionally even the NameNode) does not start up. When I check the logs, I see an error like this:
2013-09-21 16:43:54,518 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /data/hdfstmp/dfs/data, expected: rwxr-xr-x, while actual: rwxrwxr-x
So without the group permissions on the hdfstmp directory, MR jobs submitted by different users do not run; but with them, the DataNode refuses to start after a restart, as above.
How do I overcome this issue? What is the best practice here?
Also, is there a way to monitor the jobs being submitted by the different users? I am assuming the web UI should allow me to do this. Please confirm.
I appreciate any assistance you can provide on this issue. Thanks.
Regards
Adding a dedicated Hadoop system user
We will use a dedicated Hadoop user account for running Hadoop. While that's not required, it is recommended because it helps to separate the Hadoop installation from other software applications and user accounts running on the same machine (think: security, permissions, backups, etc.).
#addgroup hadoop
#adduser --ingroup hadoop hadoop1
#adduser --ingroup hadoop hadoop2
This will add the users hadoop1 and hadoop2 and the group hadoop to your local machine.
Change the ownership of your Hadoop installation directory (using hadoop1 here; pick whichever account should own the installation):
chown -R hadoop1:hadoop hadoop
And lastly, change the Hadoop temporary directory's permissions. If your temp directory is /app/hadoop/tmp:
#mkdir -p /app/hadoop/tmp
#chown hadoop1:hadoop /app/hadoop/tmp
And if you want to tighten up security, chmod from 755 to 750:
#chmod 750 /app/hadoop/tmp
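This does not by itself address the restart failure: the DataNode insists on rwxr-xr-x for dfs.data.dir, which conflicts with keeping hdfstmp group-writable. One hedged sketch, assuming dfs.data.dir currently lives under hdfstmp (the new path below is illustrative), is to give the data dir its own location at 755 and leave the group-writable permissions on the MapReduce scratch space only:

#mkdir -p /data/hdfs/dfs/data
#chown root:hadoop /data/hdfs/dfs/data
#chmod 755 /data/hdfs/dfs/data

Then point dfs.data.dir at /data/hdfs/dfs/data in hdfs-site.xml and restart the cluster.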

hadoop mapred job - Error initializing attempt mapred task

I accidentally deleted hadoop.tmp.dir, in my case /tmp/{user.name}/*. Now, every time I run a Hive query from the CLI, the mapred job fails at the task attempt as below:
Error initializing attempt_201202231712_1266_m_000009_0:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for ttprivate/taskTracker/hdfs/jobcache/job_201202231712_1266/jobToken
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:376)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
at org.apache.hadoop.mapred.TaskTracker.localizeJobTokenFile(TaskTracker.java:4432)
at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1301)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1242)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2541)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2505)
It's a test environment; I don't care about the data. How can I get the system back to normal?
You should run stop-all.sh, recreate the directory, and start everything again after formatting (hadoop namenode -format); since hadoop.tmp.dir backs HDFS by default, this is fine in a test environment.
You can simply recreate the directory and change its owner to mapred: chown mapred:mapred <your dir>
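Putting both suggestions together, a minimal sketch for a test box; <user> stands in for whatever {user.name} expands to on your system:

stop-all.sh
mkdir -p /tmp/<user>
chown mapred:mapred /tmp/<user>
start-all.sh

If HDFS metadata also lived under the deleted directory, run hadoop namenode -format before start-all.sh to rebuild it from scratch (destroying whatever was in HDFS, which the question says is acceptable).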
