Using Hadoop version 0.20, I am creating a chain of jobs, job1 and job2 (whose mappers are in x.jar; there is no reducer), with a dependency between them, and submitting it to the Hadoop cluster using JobControl. Note that I have called setJarByClass, and getJar returns the correct jar file when checked before submission.
The submission goes through, and there seem to be no errors in the user logs or the JobTracker. But I don't see my Mapper being executed (no sysouts or log output); instead, the default output appears in the output folder (the input file is read and written out as-is). I am able to run the job directly using x.jar, but I am really out of clues as to why it does not run with JobControl.
Please help!
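For reference, the submission code looks roughly like this (a minimal sketch of the setup described; the mapper class names are hypothetical and path/format setup is omitted):

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.jobcontrol.Job;
    import org.apache.hadoop.mapred.jobcontrol.JobControl;

    public class ChainRunner {
        public static void main(String[] args) throws Exception {
            // JobConf(Class) calls setJarByClass; Mapper1/Mapper2 are
            // hypothetical stand-ins for the map-only jobs in x.jar
            JobConf conf1 = new JobConf(Mapper1.class);
            JobConf conf2 = new JobConf(Mapper2.class);
            // ... input/output paths and zero reducers are set here ...

            Job job1 = new Job(conf1);
            Job job2 = new Job(conf2);
            job2.addDependingJob(job1);  // job2 starts only after job1 succeeds

            JobControl jc = new JobControl("chain");
            jc.addJob(job1);
            jc.addJob(job2);

            new Thread(jc).start();      // JobControl implements Runnable
            while (!jc.allFinished()) {
                Thread.sleep(5000);
            }
            jc.stop();
        }
    }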
This issue bugged me for quite a few days. I finally found that it was the UsedGenericOptionsParser flag that caused the issue. Set it to true and everything started working fine.
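For reference, I take "UsedGenericOptionsParser" to mean the mapred.used.genericoptionsparser property (the flag GenericOptionsParser itself sets when it parses the job arguments); that is an assumption on my part. Setting it would look like:

    // Assumption: "mapred.used.genericoptionsparser" is the flag the
    // answer refers to; GenericOptionsParser sets it to true itself
    // when it handles the command-line arguments.
    conf.setBoolean("mapred.used.genericoptionsparser", true);

where conf is the JobConf being submitted through JobControl.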
I'm in a situation where I'm modifying mapred-site.xml and the scheduler-specific configuration files for Hadoop, and I want to make sure that the modifications I have made to the default scheduler (FIFO) have actually taken effect.
How can I check which scheduler is applied to a job, or to a queue of jobs already submitted to Hadoop, using the job ID?
Sorry if this doesn't make much sense, but I've looked around quite extensively to wrap my head around this and read a lot of documentation, yet I still cannot find this fundamental piece of information.
I'm simply running word count as a job, changing scheduler settings in mapred-site.xml and yarn-site.xml.
For instance, I'm changing the property "yarn.resourcemanager.scheduler.class" to "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler" based on this link: see this.
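Concretely, the change in yarn-site.xml looks like this:

    <!-- yarn-site.xml: switch the ResourceManager to the CapacityScheduler -->
    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>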
I'm also moving the jar files specific to the schedulers to the correct directory.
For your reference, I'm using the "yarn" runtime mode, and Cloudera and Hadoop 2.
Thanks a ton for your help
When I run a Hadoop job with the hadoop launcher it prints a lot of output. Among other things, it shows the relative progress of the job ("map: 30%, reduce: 0%" and so on). But when running a job without the launcher, it does not print anything, not even errors. Is there a way to get that level of logging without the launcher? That is, without running [hadoop_folder]/bin/hadoop jar <my_jar> <indexer> <args>...?
You can get this information from the ApplicationMaster (assuming you use YARN and not MR1, where you would get it from the JobTracker). There is usually a web UI where you can find this information. The details will depend on your Hadoop installation/distribution.
In the case of Hadoop v1, check the JobTracker web UI; in the case of Hadoop v2, check the ApplicationMaster web UI.
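If you are launching the job from your own Java code instead, a minimal sketch (assuming the new org.apache.hadoop.mapreduce API on Hadoop 2) is to pass verbose=true to waitForCompletion, which prints the same "map X% reduce Y%" progress lines on the client:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SubmitWithProgress {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "my-job");
            // ... set jar, mapper/reducer classes, input/output paths ...
            boolean ok = job.waitForCompletion(true); // true = print progress
            System.exit(ok ? 0 : 1);
        }
    }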
I installed a Hadoop multi-node cluster based on this link: http://pingax.com/install-apache-hadoop-ubuntu-cluster-setup/
Then I tried to run the wordcount example in my environment, but when I access the ResourceManager at http://HadoopMaster:8088 to see the job's details, no records show up in the UI.
I also searched for this problem; one person gives a solution in Hadoop is not showing my job in the job tracker even though it is running, but in my case I'm just running Hadoop's own wordcount example, and I didn't add any extra configuration for YARN.
Has anyone who has successfully installed a Hadoop 2 multi-node cluster with a correctly working web UI run into this issue? Any help, or a link to a correct installation guide, would be appreciated.
Did you get the output of the wordcount job?
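If not: a common cause of jobs never appearing at :8088 (and, as far as I recall, the gist of the linked question) is that the job runs with the LocalJobRunner instead of being submitted to YARN. Check that mapred-site.xml on the machine you submit from contains:

    <!-- mapred-site.xml: submit jobs to YARN; without this the example
         can run locally and never show up in the ResourceManager UI -->
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>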
I am testing Hadoop; as of now I have:
1) localhost:8088 working
2) localhost:50070 working
3) I created a few files on HDFS
Then I launch Pig, do a LOAD on a file, then a FILTER, and then a DUMP.
When I DUMP, Pig displays info about the MapReduce job starting.
It ends with a line like:
"MapReduceLauncher - 0% complete" + "Running Jobs are [job_xxx]".
So I think the job is launched. I even see it as an ACCEPTED app in the Hadoop UI at localhost:8088. But then nothing happens: it is stuck at 0% complete :-(
So, the job is "ACCEPTED" but never goes to RUNNING :-(
Should I do something to make my grunt/Pig command-line instructions run?
Thanks.
JR.
PS: I can't copy and paste anything from my job environment.
I unblocked the situation when I realized that my hard drive was 90% full. At that level, Hadoop refuses to write any more logs. I just had to delete some (big!) files to get it running again...
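For context: on YARN this matches the NodeManager's disk health checker, which marks a disk, and eventually the whole node, unhealthy once utilization crosses 90% by default; no containers can then launch, so jobs sit in ACCEPTED. In recent Hadoop 2 releases the threshold is configurable in yarn-site.xml (verify the property against your version):

    <!-- yarn-site.xml: disk utilization threshold of the NodeManager
         disk health checker (default 90.0) -->
    <property>
      <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
      <value>95.0</value>
    </property>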
In recent weeks, we have been using Hadoop Streaming to calculate some reports every day. Recently we made a change to our program: if the input size is smaller than 10 MB, we set mapred.job.tracker=local in the JobConf, so the job runs locally.
But last night, many jobs failed, with status 3 returned by runningJob.getJobState().
I don't know why, and there is nothing in stderr.
I couldn't google anything related to this question. So I'm wondering: should I be using mapred.job.tracker=local in production? Maybe it's just a debugging option for development supplied by Hadoop.
Does anyone know anything about this? Any information would be appreciated. Thank you.
I believe setting mapred.job.tracker=local has nothing to do with your error, as local is the default value.
This config parameter defines the host and port that the MapReduce JobTracker runs at. If it is set to "local", jobs are run in-process as a single map and reduce task.
Refer to the description of mapred.job.tracker in mapred-default.xml.
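For reference, the switch described in the question looks roughly like this (inputSizeBytes is a hypothetical stand-in for however the input size is measured):

    import org.apache.hadoop.mapred.JobConf;

    public class ReportJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf();
            long inputSizeBytes = Long.parseLong(args[0]); // hypothetical
            if (inputSizeBytes < 10L * 1024 * 1024) {      // under 10 MB
                // run in-process as a single map and reduce task
                // instead of submitting to the JobTracker
                conf.set("mapred.job.tracker", "local");
            }
            // ... configure and submit the job with conf ...
        }
    }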