Hey, hi. I have written some code and tried running the jar, but I got an exception saying that the jar file cannot be opened. I even changed the file permissions, and the path is set correctly, but it still shows me an error saying that the jar file cannot be opened.
Please suggest some solutions.
That is the error I got. I have also gone through the wordcount example, but could not find any answer.
Is your jar file in the local file system or in HDFS? The jar file should be in the local file system.
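If the jar is currently in HDFS, copy it to the local file system first (for example with hadoop fs -get) and run hadoop jar against the local copy. The same copy can also be done programmatically with the Hadoop FileSystem API; a minimal sketch, with hypothetical paths:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FetchJar {
    public static void main(String[] args) throws Exception {
        // Connect using the cluster configuration found on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Copy the jar from HDFS to the local file system (hypothetical paths).
        fs.copyToLocalFile(new Path("/user/hadoop/myjob.jar"),
                           new Path("/home/hadoop/myjob.jar"));
        // Now "hadoop jar /home/hadoop/myjob.jar ..." can open the local copy.
    }
}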
After downloading the jar file I clicked on it, and a message came up saying it was blocked ("The file '/home/hadoop/Downloads/hadoop-core-1.2.jar' is not marked as executable. If this was downloaded or copied from an untrusted source, it may be dangerous…").
That file is a client library. There is nothing that "clicking" it would run.
However, as the warning says, an executable JAR file with any name could be harmful.
If you are trying to actually run a Hadoop cluster, then download the full package from the Apache site (preferably Hadoop 3.x), rather than one of the many individual JAR files from version 1.2.
I have created a Java Web Start application, so my application launches from the web browser. I need to run an executable file on Mac OS, so I have packaged the executable file inside my jar file. But I am not able to load it from the classpath, as it gives me a "File or directory does not exist" exception. I then tried extracting the jar file, but since the application is launched from the web browser, I am not able to get the location of the jar file. I looked up the path of the Java cache in the Java Control Panel and searched that path, but found nothing.
Is there any way to use an executable that resides inside a jar file without extracting the jar?
How do I find the downloaded jar file on the system? I have checked the Java Control Panel path, but I cannot find the jar file there.
I want to copy the executable to a temp directory. How do I do that?
Please help me with this.
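One common approach is to read the executable from the classpath as a resource and copy it out to a temp file at runtime, so you never need to know where the jar itself is cached. A minimal sketch, assuming the executable is packaged at /bin/mytool inside the jar (the resource path and names here are hypothetical):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ExtractTool {
    public static void main(String[] args) throws Exception {
        // Load the bundled executable from the classpath rather than from disk.
        try (InputStream in = ExtractTool.class.getResourceAsStream("/bin/mytool")) {
            if (in == null) {
                throw new IllegalStateException("/bin/mytool not found on classpath");
            }
            // Copy it into a temp file the OS can see.
            Path tmp = Files.createTempFile("mytool", null);
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            // Mark it executable (required on Mac OS / Unix).
            tmp.toFile().setExecutable(true);
            // Launch it and wait for it to finish.
            new ProcessBuilder(tmp.toString()).inheritIO().start().waitFor();
        }
    }
}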
I started working with Hive just recently, so I may be a little new to this. I compiled a jar using a Maven build, and for some reason, when I try to add it in Hive, it won't work. I get the following error:
Query returned non-zero code: 1, cause: ex-0.0.0.1-SNAPSHOT.jar does not exist.
I uploaded the file using Hue, and I can find it if I do dfs -ls in Hive.
What am I missing? (I was able to load a jar I got online)
Thanks!
If you can find your jar by -lsing to it and it was properly built, this error is usually caused by incorrectly putting quotes around the path to the jar.
Incorrect:
add jar '/root/complete/path/to/jar.jar';
Correct:
add jar /root/complete/path/to/jar.jar;
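For context, jars added this way typically contain a user-defined function. If that is the case here, a minimal Hive UDF looks something like the sketch below (the class name Lower, the function name, and the jar path are illustrative, not the asker's actual code):

// After building this into your jar, register it in Hive with
// (note: no quotes around the jar path):
//   add jar /path/to/ex-0.0.0.1-SNAPSHOT.jar;
//   create temporary function my_lower as 'Lower';
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class Lower extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        return new Text(input.toString().toLowerCase());
    }
}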
I am learning Hadoop with the book 'Hadoop in Action' by Chuck Lam. In the first chapter, the book says that the Hadoop installation will have an examples jar, and that running 'hadoop jar hadoop-*-examples.jar' will show all the examples. But when I run the command, it throws the error 'Could not find or load main class org.apache.hadoop.util.RunJar'. My guess is that the installed Hadoop doesn't have the examples jar. I have installed 'hadoop-2.1.0-beta.tar.gz' on Cygwin on a Windows 7 laptop. Please suggest how to get the examples jar.
Run the following command:
hadoop jar PathToYourJarFile wordcount inputPath OutputPath
You can find the examples jar file in your Hadoop installation directory.
What I can suggest here is that you manually go to the Hadoop installation directory and look for a jar with a name similar to hadoop-examples.jar yourself. Different distributions can have different names for the jar.
If you are in Cygwin, while in the Hadoop installation directory you can also do ls *examples*.jar to find it, narrowing the listing down to any jar file containing 'examples' in its name.
You can then use the jar file name you found directly:
hadoop jar <exampleJarYourFound.jar>
Hope this takes you to a solution.
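If you cannot locate the examples jar at all, the wordcount example it contains is small enough to build yourself. The sketch below is condensed from the standard Hadoop WordCount example (treat it as a reference rather than the exact shipped code); package it with jar cvf wc.jar WordCount*.class and run it as hadoop jar wc.jar WordCount <input> <output>:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Emits (word, 1) for every token in each input line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class); // tells Hadoop which jar to ship
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}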
I've been trying to figure out how to execute my Map/Reduce job for almost 2 days now. I keep getting a ClassNotFoundException.
I've installed a Hadoop cluster on Ubuntu using Cloudera CDH4.3.0. The .java file (DemoJob.java, which is not inside any package) is inside a folder called inputs, and all required jar files are inside inputs/lib.
I followed http://www.cloudera.com/content/cloudera-content/cloudera-docs/HadoopTutorial/CDH4/Hadoop-Tutorial/ht_topic_5_2.html for reference.
I compile the .java file using:
javac -cp "inputs/lib/hadoop-common.jar:inputs/lib/hadoop-map-reduce-core.jar" -d Demo inputs/DemoJob.java
(In the link, it says -cp should be "/usr/lib/hadoop/:/usr/lib/hadoop/client-0.20/". But I don't have those folders in my system at all)
I create the jar file using:
jar cvf Demo.jar Demo
I move the 2 input files to HDFS.
(Now this is where I'm confused. Do I need to move the jar file to HDFS as well? It doesn't say so in the link. But if it is not in HDFS, then how does the hadoop jar ... command work? I mean, how does it combine the jar file, which is on the Linux file system, with the input files, which are in HDFS?)
I run my code using:
hadoop jar Demo.jar DemoJob /Inputs/Text1.txt /Inputs/Text2.txt /Outputs
I keep getting ClassNotFoundException: DemoJob.
Somebody please help.
A ClassNotFoundException only means that some class wasn't found when the class DemoJob was loaded. The missing class could be a class referenced (imported, for example) by DemoJob. I think the problem is that you don't have the /usr/lib/hadoop/:/usr/lib/hadoop/client-0.20/ folders (classes) on your classpath. The classes that should be there but aren't are probably what is triggering the ClassNotFoundException.
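On the side question of whether the jar itself must be in HDFS: it does not. hadoop jar reads the jar from the local file system, and the job client then ships it to the cluster's staging area for you; only the input and output paths are resolved in HDFS. The usual way to make this work is a setJarByClass call in the driver. A minimal map-only driver sketch using the asker's names (everything beyond those names is assumed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Skeleton driver in the default package, mirroring the asker's setup.
public class DemoJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "demo");
        // The jar lives on the LOCAL file system; this call tells the client
        // which jar to upload to the cluster, which is how a local jar and
        // HDFS input files end up together on the worker nodes.
        job.setJarByClass(DemoJob.class);
        job.setMapperClass(Mapper.class); // identity mapper, keeps the sketch minimal
        job.setNumReduceTasks(0);         // map-only job
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        // These paths are resolved in HDFS at run time:
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}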
Finally figured out what the problem was. Instead of creating a jar file from a folder, I directly created the jar file from the .class files using jar -cvf Demo.jar *.class
This resolved the ClassNotFound error. But I don't understand why it was not working earlier. Even when I created the jar file from a folder, I did mention the folder name when executing the class, as: hadoop jar Demo.jar Demo.DemoJob /Inputs/Text1.txt /Inputs/Text2.txt /Outputs
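A likely explanation, based on standard jar and class loader rules (an assumption, since the exact jar contents aren't shown): jar cvf Demo.jar Demo stores the class under a directory entry, as Demo/DemoJob.class, and the JVM reads that path as "class DemoJob in package Demo". But DemoJob.java declares no package, so its real (binary) name is plain DemoJob; the stored entry therefore matches neither DemoJob nor Demo.DemoJob, and loading fails either way. Creating the jar directly from the .class files puts DemoJob.class at the jar root, where the default-package name resolves. In code terms:

// DemoJob.java as described: NO package declaration, so the binary name is "DemoJob".
public class DemoJob {
    // ...
}

// jar cvf Demo.jar Demo      -> jar entry: Demo/DemoJob.class
//    hadoop jar Demo.jar DemoJob        looks for DemoJob.class at the jar root: not found
//    hadoop jar Demo.jar Demo.DemoJob   finds the entry, but the class inside declares
//                                       no package, so the names do not match either
// jar cvf Demo.jar *.class   -> jar entry: DemoJob.class (at the root)
//    hadoop jar Demo.jar DemoJob        now resolves correctly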