Hadoop task tracker - all local directories are not writable

I have a 10 node cluster.
When I submit Hive jobs I get the below error -
WARN org.apache.hadoop.mapred.TaskTracker: Task Tracker local Incorrect permission for /data/gomz/mapred/local, expected: rwxr-xr-x, while actual: rwxrwxr-x
ERROR org.apache.hadoop.mapred.TaskTracker: Can not start TaskTracker because org.apache.hadoop.util.DiskChecker$DiskErrorException: all local directories are not writable
at org.apache.hadoop.mapred.TaskTracker.checkLocalDirs(TaskTracker.java:5268)
at org.apache.hadoop.mapred.TaskTracker.initializeDirectories(TaskTracker.java:907)
at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:979)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:2176)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:5310)
mapred.local.dir in both mapred-site.xml and taskcontroller.cfg point to /data/gomz/mapred/local
For my Hive sessions, I use the following settings:
SET hive.exec.scratchdir=/dev/tmp/hive;
SET hive.metastore.warehouse.dir=/dev/warehouse; (this setting works for Hive jobs that do not launch MapReduce)
What other local directories could the error be referring to?

Can you check the permissions of /data? Try this command:
sudo chown $USER /data
After executing it, try starting the TaskTracker again.
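Note that the warning also states the expected mode: the TaskTracker wants rwxr-xr-x (755) on mapred.local.dir, while the directory is group-writable (775). A minimal sketch of tightening it, assuming the TaskTracker daemon runs as the mapred user (not stated in the question):
sudo chown -R mapred:mapred /data/gomz/mapred/local   # assumed daemon user; use whichever account runs the TaskTracker
sudo chmod 755 /data/gomz/mapred/local                # matches the expected rwxr-xr-x from the log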

Related

Permission denied issue in mapreduce?

I have tried the below query.
hadoop jar /home/cloudera/workspace/para.jar word.Paras examples/wordcount /home/cloudera/Desktop/words/output
The MapReduce job starts, but after that it shows the error below. Can anyone please help with this issue?
15/11/04 10:33:57 INFO mapred.JobClient: Task Id : attempt_201511040935_0008_m_000002_0, Status : FAILED
org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
Do I need to change anything in a config file or in Cloudera Manager?
The exception suggests that you are trying to write to the HDFS root directory "/", which you (user: cloudera) do not have permission to do.
Without knowing what your specific jar does:
I guess that the last argument ("/home/cloudera/Desktop/words/output") is where you wish to place the output.
I guess this is supposed to be within HDFS, where /home does not exist.
Try changing this to somewhere you can write, possibly "/user/cloudera/words/output".
There is a set of default directories to be created before you start using the Hadoop cluster.
Run the following; it should show you the directories:
$ hadoop fs -ls /
For example, if you want to run as the cloudera user, you need the following directories on HDFS:
/user/cloudera -- the user running the program
/user/hadoop -- your hadoop file system user
/user/mapred -- your mapred user
/tmp -- temporary directory, which needs to have permission 1777 (set as the hdfs user with hadoop fs -chmod 1777 /tmp)
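A minimal sketch of creating those directories, assuming hdfs is the HDFS superuser account on this Cloudera cluster:
sudo -u hdfs hadoop fs -mkdir /user/cloudera /user/hadoop /user/mapred   # create whichever are missing
sudo -u hdfs hadoop fs -chown cloudera /user/cloudera                    # hand the home directory to the submitting user
sudo -u hdfs hadoop fs -chmod 1777 /tmp                                  # sticky-bit permission noted above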
HTH.
The last argument that you are passing should be an output path in HDFS, not on the default local file system.
As you are running as the cloudera user, you can point to /user/cloudera/words/output. But first you need to check whether you have a cloudera directory in HDFS and whether you have write permission on it, by issuing the following:
hadoop fs -ls /user/
Once you have it, change your command to the following:
hadoop jar /home/cloudera/workspace/para.jar word.Paras examples/wordcount <path_where_you_have_write_permission_in_HDFS>
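For instance, assuming /user/cloudera exists and is writable (the words/output subdirectory is just an illustrative choice):
hadoop jar /home/cloudera/workspace/para.jar word.Paras examples/wordcount /user/cloudera/words/output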

Multi-Node Hadoop: NameNode and DataNode not working

I am a new student of Hadoop clusters, and I built a multi-node cluster in the lab,
but I cannot start the NameNode or DataNode.
After I execute start-all.sh and run jps, the master only shows JobTracker, TaskTracker, SecondaryNameNode, and Jps. The slaves work fine, with DataNode and TaskTracker running.
And when I execute stop-all.sh:
it shows "no tasktracker to stop", but the TaskTracker did show up in jps.
And this is the log file about NameNode:
1.Cannot access storage directory /app/hadoop/tmp/dfs/name
2.ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
3.org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
4.org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
And I did try the namenode -format, yet it doesn't work.
Could somebody show me the way, and tell me why this happens?
Lots of thanks ahead.
PS: I am using Hadoop 1.0.3 + Java 1.7.0_51
I think you did not give permissions on the storage directory under hadoop.tmp.dir.
Try the below command to give permissions, then try start-all.sh again:
sudo chown $USER /<dir name>
And try this command:
hadoop namenode -format
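Putting it together as a hedged sketch (hduser:hadoop below is an assumption; use the account and group that run your Hadoop daemons, and note that formatting discards any existing HDFS metadata):
sudo mkdir -p /app/hadoop/tmp                 # recreate the missing storage directory from the log
sudo chown -R hduser:hadoop /app/hadoop/tmp   # assumed owner; match your Hadoop user
hadoop namenode -format                       # acceptable here since the cluster is new
start-all.sh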

Backup hdfs directory from full-distributed to a local directory?

I'm trying to back up a directory from HDFS to a local directory. I have a Hadoop/HBase cluster running on EC2. I managed to do what I want running in pseudo-distributed mode on my local machine, but now that I'm fully distributed the same steps are failing. Here is what worked in pseudo-distributed mode:
hadoop distcp hdfs://localhost:8020/hbase file:///Users/robocode/Desktop/
Here is what I'm trying on the hadoop namenode (hbase master) on ec2
ec2-user@ip-10-35-53-16:~$ hadoop distcp hdfs://10.35.53.16:8020/hbase file:///~/hbase
The errors I'm getting are below
13/04/19 09:07:40 INFO tools.DistCp: srcPaths=[hdfs://10.35.53.16:8020/hbase]
13/04/19 09:07:40 INFO tools.DistCp: destPath=file:/~/hbase
13/04/19 09:07:41 INFO tools.DistCp: file:/~/hbase does not exist.
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.io.IOException: Failed to create file:/~/hbase
at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1171)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
You can't use the ~ character in Java to represent the current home directory, so change to a fully qualified path, e.g.:
file:///home/user1/hbase
But I think you're going to run into problems in a fully distributed environment, as the distcp command runs a MapReduce job, so the destination path will be interpreted as local to each cluster node.
If you want to pull data down from HDFS to a local directory, you'll need to use the -get or -copyToLocal switches of the hadoop fs command.
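A hedged example run on the NameNode host (the local destination directory below is just an illustration; any writable local path works):
hadoop fs -get /hbase /home/ec2-user/hbase-backup          # copies the HDFS /hbase tree to the local filesystem
hadoop fs -copyToLocal /hbase /home/ec2-user/hbase-backup  # equivalent alternative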

Sqoop Permission Issue when running inside Map Reduce Code

I am trying to invoke Sqoop through a map reduce program using
Sqoop.runTool(arguments,_conf);
When executing, I receive the following error
Exception in thread "main" java.lang.RuntimeException: Could not create temporary directory: /tmp/sqoop-hdfs/compile/a609226c19d65f561dd7035c00d318f6; check for a directory permissions issue on /tmp.
I have set the permissions on /tmp and its subdirectories in HDFS to 777.
I can invoke the same command fine from the command line using sudo -u hdfs sqoop ...
This is Cloudera's Hadoop distribution and I am running the job as the hdfs user.
This probably isn't the /tmp directory in HDFS, but rather the /tmp directory on the local file system - what are the permissions on that directory? (That would also explain why it works when you sudo the command.)
Just clean out the /tmp/sqoop-hdfs/compile folder and it works.
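A minimal sketch of both suggestions, run on the local filesystem of the node executing the job (paths taken from the error message; adjust if yours differ):
ls -ld /tmp /tmp/sqoop-hdfs             # check the local permissions, not the HDFS ones
sudo chmod 1777 /tmp                    # typical sticky-bit mode for a shared /tmp
sudo rm -rf /tmp/sqoop-hdfs/compile     # or simply clear the stale compile directory; it is recreated on the next run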

hadoop mapred job - Error initializing attempt mapred task

I accidentally deleted hadoop.tmp.dir, in my case /tmp/{user.name}/*. Now, every time I run a Hive query from the CLI, the mapred job fails at the task attempt as below:
Error initializing attempt_201202231712_1266_m_000009_0:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for ttprivate/taskTracker/hdfs/jobcache/job_201202231712_1266/jobToken
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:376)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
at org.apache.hadoop.mapred.TaskTracker.localizeJobTokenFile(TaskTracker.java:4432)
at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1301)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1242)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2541)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2505)
It's a test environment and I don't care about the data. How can I get the system back to normal?
You should run stop-all.sh, recreate the tmp directory, reformat, and then start again.
You can simply recreate the directory and change its owner to mapred: chown mapred:mapred <your dir>
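Combining both answers above into a hedged sketch for this test environment, assuming hadoop.tmp.dir is still /tmp/{user.name} and the TaskTracker runs as mapred (the owner may differ on your distribution):
stop-all.sh
mkdir -p /tmp/{user.name}                   # recreate the deleted hadoop.tmp.dir
chown -R mapred:mapred /tmp/{user.name}     # owner assumed, per the answer above
hadoop namenode -format                     # only acceptable because the data does not matter
start-all.sh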
