Can't run a MapReduce job on Hadoop

I'm trying to run a MapReduce job taken from the internet. This job takes a 'points.dat' file as input and runs k-means clustering on it. It should produce a 'centroids.dat' file and a file with the points matched to their own centroid. A couple of months ago this was working, but now I'm trying to re-execute it on a new installation.
I ran
bin/hdfs dfs -copyFromLocal ..//..//../home/<myusername>/Downloads/points.dat
Everything is fine and the file appears in the web UI under the /user/<myusername>/ path on HDFS. jps looks fine.
The jar expects the arguments:
<input> <output> <n clusters>
so I ran
bin/hadoop jar ../../../home/<myusername>/Downloads/kmeans.jar /user/<myusername>/ /out 3
it creates a "centroids.dat" file in /user/ and a out/ directory. As much as i can understand it tries to re-read "centroids.dat" to execute. So it ends with some failures like
"failed creating symlink /tmp/hadoop-<myusername>/mapred/local/1466809349241/centroids.dat <- /usr/local/hadoop/centroids.dat
So java raise a FileNotFoundException
I tried to shorten the question as much as possible. If more info is needed, that's no problem for me.

I think you are missing the main class in your command (hadoop jar only needs it when the jar's manifest does not already declare a Main-Class):
bin/hadoop jar kmeans.jar MainClass input output
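For reference, such a main class is usually a small driver along the lines of the sketch below (the class name KMeansDriver and the configuration key are placeholders, not the actual names inside kmeans.jar); it is what consumes the <input> <output> <n clusters> arguments:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class KMeansDriver {  // placeholder name
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("kmeans.clusters", Integer.parseInt(args[2]));  // <n clusters>
        Job job = Job.getInstance(conf, "k-means");
        job.setJarByClass(KMeansDriver.class);
        // the k-means Mapper/Reducer classes would be wired up here
        FileInputFormat.addInputPath(job, new Path(args[0]));    // <input>
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // <output>
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With the class name included, the original invocation would become bin/hadoop jar ../../../home/<myusername>/Downloads/kmeans.jar <fully.qualified.MainClass> /user/<myusername>/ /out 3.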

Related

HADOOP Error: Could not find or load main class org.apache.hadoop.fs.FsShell

I'm building a word counter program and I want to create a working directory in HDFS, but when I execute hdfs dfs -mkdir wordcount or other commands from the hdfs dfs command list, it returns Error: Could not find or load main class org.apache.hadoop.fs.FsShell. Google has told me that maybe it is a problem with the PATH variable, but I checked it and it's ok. Thank you!
The error means the hadoop classpath command has issues, not your PATH.
And you don't need HDFS to run or learn MapReduce / Spark WordCount code. It works fine on your local filesystem as well.
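As an example, a sketch of how that can be done purely through configuration (the helper class name is just illustrative; the two property names are standard Hadoop settings):

import org.apache.hadoop.conf.Configuration;

public class LocalModeConfig {  // illustrative helper, not part of Hadoop
    // Build a Configuration that makes MapReduce read and write the local
    // filesystem and run in a single local JVM, so HDFS is not needed at all.
    public static Configuration create() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "file:///");           // local files instead of hdfs://
        conf.set("mapreduce.framework.name", "local");  // LocalJobRunner instead of YARN
        return conf;
    }
}

Pass that Configuration to Job.getInstance(...) in your WordCount driver and use ordinary local directories as the input and output paths.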

Why is my MR2 map task running under the 'yarn' user and not under the user I ran the hadoop job as?

I'm trying to run a MapReduce job on MR2, Hadoop ver. 2.6.0-cdh5.8.0. The job has a relative path to a directory containing a lot of files to be compressed based on some criteria (not really relevant to this question). I'm running my job as follows:
sudo -u my_user hadoop jar my_jar.jar com.example.Main
There is a folder with files on HDFS under the path /user/my_user/. But when I run my job I get the following exception:
java.io.FileNotFoundException: File /user/yarn/<path_from_job> does not exist.
I'm migrating this job from MR1, where it works correctly. My guess is that this happens because of YARN, since each container is started as the yarn user. In my job configuration I've tried setting mapreduce.job.user.name="my_user", but this didn't help.
I've found ${user.home} used in my job configuration, but I don't know where it is set or whether it is possible to change it.
The only solution I have found so far is to provide an absolute path to the folder. Is there any other way around this? I feel this is not the correct approach.
Thank you
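One alternative to hard-coding an absolute path, sketched under the assumption that the relative name is resolved in your own code: resolve it in the driver, which still runs as my_user when submitted with sudo -u my_user, and hand the absolute result to the tasks (the class and method names below are illustrative; getHomeDirectory and Path are standard Hadoop APIs):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathResolver {  // illustrative helper
    // Resolve a relative directory against the HDFS home of the submitting user
    // (my_user), so tasks running as 'yarn' still see /user/my_user/<dir>
    // instead of /user/yarn/<dir>.
    public static Path resolveInSubmitterHome(Configuration conf, String dir) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        return new Path(fs.getHomeDirectory(), dir);  // e.g. hdfs://nn/user/my_user/<dir>
    }
}

The resolved path can then be put into the job Configuration (or passed to FileInputFormat) at submission time, so nothing in the tasks depends on ${user.home}.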

Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException:Input path does not exist: hdfs:host/user/yogesh/WordCount

I have created the input text file test.txt and put it into HDFS as /user/yogesh/Input/test.txt
Created the output path on HDFS as /user/yogesh/Output
Created the jar file locally at /home/yogesh/WordCount.jar and submitted the MR job from the local machine, like this: hadoop jar /home/yogesh/WordCount.jar WordCount /user/yogesh/Input/test.txt /user/yogesh/Output/output1
I have got following error:
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException:Input path does not exist: hdfs:host/user/yogesh/WordCount.
hdfs:host/user/yogesh/ is my HDFS home directory. I am not able to understand why this MR job is looking for WordCount in HDFS and how to solve this error.
Try giving the package name of the class WordCount as its prefix, or just skip the class name and pass only the jar, input and output. If the jar's manifest already declares a main class, the extra WordCount argument gets shifted into args[0] and treated as the input path, which is exactly the missing path in your error. Like this:
hadoop jar /home/yogesh/WordCount.jar /user/yogesh/Input /user/yogesh/Output/output1
Also, make sure that /user/yogesh/Output/output1 does not exist before executing this command. Notice too that you should give an input directory and not an input file: Hadoop will take all the files in the specified directory as input.
For an example, see how the standard WordCount example is run.

Output Folders for Amazon EMR

I want to run a custom jar whose main class runs a chain of MapReduce jobs, with the output of the first job going in as the input of the second job, and so on.
What do I set in FileOutputFormat.setOutputPath("what path should be here?");
If I specify -outputdir in the arguments, I get a FileAlreadyExists error. If I don't specify it, then I do not know where the output will land. I want to be able to see the output from every job in the chain of MapReduce jobs.
Thanks in advance. Please help!
You are likely getting the "FileAlreadyExists" error because that output directory exists prior to the job you are running. Make sure to delete the directories that you specify as output for your Hadoop jobs before running them; otherwise you will not be able to run those jobs.
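As a rough sketch of both points, assuming two chained jobs (the class, stage and path names below are placeholders): give each stage its own output directory under a common base path, feed the previous stage's output into the next stage, and remove any stale directory first so the FileAlreadyExists error cannot occur.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobChain {  // placeholder driver name
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]);
        Path base  = new Path(args[1]);          // e.g. an S3 or HDFS directory for this run
        Path out1  = new Path(base, "stage1");   // kept, so every stage's output is inspectable
        Path out2  = new Path(base, "stage2");

        FileSystem fs = base.getFileSystem(conf);
        for (Path p : new Path[] {out1, out2}) {
            if (fs.exists(p)) fs.delete(p, true);  // avoid the "already exists" failure
        }

        Job first = Job.getInstance(conf, "stage 1");
        first.setJarByClass(JobChain.class);
        // stage-1 Mapper/Reducer classes would be set here
        FileInputFormat.addInputPath(first, input);
        FileOutputFormat.setOutputPath(first, out1);
        if (!first.waitForCompletion(true)) System.exit(1);

        Job second = Job.getInstance(conf, "stage 2");
        second.setJarByClass(JobChain.class);
        // stage-2 Mapper/Reducer classes would be set here
        FileInputFormat.addInputPath(second, out1);   // stage 1 output feeds stage 2
        FileOutputFormat.setOutputPath(second, out2);
        System.exit(second.waitForCompletion(true) ? 0 : 1);
    }
}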
Good practice is to take the output path from the command line, as it increases the flexibility of your code and you only need to compile your jar once, provided the only changes are to your paths.
For EMR, once you launch your cluster and compile your jar, for example:
dfs_ip_folder=HDFS_IP_DIR
dfs_op_folder=HDFS_OP_DIR
hadoop jar hadoop-examples-*.jar wordcount ${dfs_ip_folder} ${dfs_op_folder}
Note: you have to create dfs_ip_folder and store the input data inside it.
dfs_op_folder will be created automatically on HDFS, not on the local filesystem.
To access the HDFS output folder you can either copy it to the local filesystem or cat it, e.g.:
hadoop fs -cat ${dfs_op_folder}/<file_name>
hadoop fs -copyToLocal ${dfs_op_folder} ${your_local_input_dir_path}

Where are input/output files stored in Hadoop and how do I execute a Java file in Hadoop?

Suppose I write a Java program and I want to run it in Hadoop, then
where should the file be saved?
how do I access it from Hadoop?
should I be calling it with the following command? hadoop classname
what is the command in Hadoop to execute the Java file?
The simplest answers I can think of to your questions are:
1) Anywhere
2, 3, 4) $HADOOP_HOME/bin/hadoop jar [path_to_your_jar_file]
A similar question was asked here Executing helloworld.java in apache hadoop
It may seem complicated, but it's simpler than you might think!
Compile your map/reduce classes, and your main class into a jar. Let's call this jar myjob.jar.
This jar does not need to include the Hadoop libraries, but it should include any other dependencies you have.
Your main method should set up and run your map/reduce job; a minimal sketch is shown after these steps.
Put this jar on any machine with the hadoop command line utility installed.
Run your main method using the hadoop command line utility:
hadoop jar myjob.jar
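A minimal sketch of such a main class, based on the standard WordCount example (your real map/reduce classes go in its place):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyJob {  // packaged into myjob.jar

    // Map: emit (word, 1) for every word in the input line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(MyJob.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Package it as myjob.jar and run it with hadoop jar myjob.jar <input_dir> <output_dir> (or add MyJob before the paths if the manifest does not name the main class).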
Hope that helps.
where should the file be saved?
The data should be saved in HDFS. You will probably want to load it into the cluster from your data source using something like Apache Flume. The file can be placed anywhere, but the usual home directory is /user/hadoop/.
how to access it from hadoop?
SSH into the Hadoop cluster head node as you would a standard Linux server.
To list the root of HDFS:
hadoop fs -ls /
should i be calling it by the following command? hadoop classname
You should be using the hadoop command to access your data and run your programs; try hadoop help.
what is the command in hadoop to execute the java file?
hadoop jar MyJar.jar com.mycompany.MainDriver arg[0] arg[1] ...
