I moved a tab-delimited file into HDFS and am now trying to load it into HBase.
Below is my importtsv command:
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:ok,cf:ek,cf:rk,cf:rsk,cf:pdk,cf:pmk,cf:omk,cf:sok,cf:sdk,cf:cdk,cf:q,cf:uc,cf:up,cf:usp,cf:gm,cf:st,cf:gp -Dimporttsv.skip.bad.lines=false 'sales_fact' hdfs://localhost:54310/my/file.txt
It is trying to read a jar from a location that doesn't exist.
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/home/elijah/Downloads/hbase/lib/htrace-core-3.1.0-incubating.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:738)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:747)
I don't understand why it has mixed the HDFS and local directory paths into one:
hdfs://localhost:54310/home/elijah/Downloads/hbase/lib/htrace-core-3.1.0-incubating.jar
The user who is running the import job has full access to the HBase lib directory on the local filesystem.
I can see the -libjars option is missing. You can use the -libjars option; below is an example usage:
hadoop jar \
hbase-server-0.98.6-cdh5.2.1.jar \
importtsv \
-libjars /opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/high-scale-lib-1.1.1.jar \
-Dimporttsv.separator=, -Dimporttsv.bulk.output=output \
-Dimporttsv.columns=HBASE_ROW_KEY,f:count wordcount \
word_count.csv
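Applied to the command in the question, the same -libjars idea would look roughly like this (a sketch, assuming the htrace jar really is at the local path shown in the error message; everything else is unchanged from the original command):
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
-libjars /home/elijah/Downloads/hbase/lib/htrace-core-3.1.0-incubating.jar \
-Dimporttsv.columns=HBASE_ROW_KEY,cf:ok,cf:ek,cf:rk,cf:rsk,cf:pdk,cf:pmk,cf:omk,cf:sok,cf:sdk,cf:cdk,cf:q,cf:uc,cf:up,cf:usp,cf:gm,cf:st,cf:gp \
-Dimporttsv.skip.bad.lines=false \
'sales_fact' hdfs://localhost:54310/my/file.txt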
You can also do something like this:
# export HADOOP_CLASSPATH=`./hbase classpath`
One of the jars that was missing, hbase/lib/htrace-core-3.1.0-incubating.jar, is on the hbase classpath, so this should work in this case.
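Putting the classpath approach together with the original command (again a sketch, assuming the HBase installation is the one at /home/elijah/Downloads/hbase that appears in the error path):
export HADOOP_CLASSPATH=`/home/elijah/Downloads/hbase/bin/hbase classpath`
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv ...   # same arguments as in the question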
I am trying to run a MapReduce job with compression:
hadoop jar \
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
randomtextwriter \
-Ddfs.replication=1 -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzoCodec \
/tmp/randomtextwriter
I used parcels to distribute LZO to all nodes in the cluster. Even then I am getting the below error:
Error: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec was not found.
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCompressorClass(FileOutputFormat.java:140)
at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:56)
at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:659)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1731)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2409)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCompressorClass(FileOutputFormat.java:138)
... 10 more
As a temporary solution you can manually add the hadoop-lzo jar to the Hadoop classpath.
curl -O https://maven.twttr.com/com/hadoop/gplcompression/hadoop-lzo/0.4.19/hadoop-lzo-0.4.19.jar
hadoop jar \
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
randomtextwriter \
-libjars hadoop-lzo-0.4.19.jar \
-Ddfs.replication=1 -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzoCodec \
/tmp/randomtextwriter
Please make sure you download a version of hadoop-lzo that is compatible with your Hadoop version.
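Before resubmitting, a quick sanity check that the codec class is visible on the client side (a sketch, assuming the jar was downloaded into the current working directory):
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$(pwd)/hadoop-lzo-0.4.19.jar
hadoop classpath | tr ':' '\n' | grep -i lzo
Note that -libjars is what ships the jar to the map and reduce tasks (where this particular error is thrown), while HADOOP_CLASSPATH only covers the client JVM, so you generally want both.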
I successfully created and executed a Sqoop import job, but I am unable to run it in an Oozie workflow. Sqoop imports the data from an RDBMS to a Parquet file in HDFS. The problem seems to be related to the Parquet format: if I use --as-textfile, the workflow runs without any problem.
Also, I've copied all the parquet-*.jar files from SQOOP_HOME/lib to the Oozie share lib.
Sqoop 1.4.7 and Oozie 4.3.1.
Sqoop job definition
$ sqoop job --create ingest_amsp_custmaster -- import --connect "jdbc:oracle:thin:@<IP>:<PORT>/<SID>" \
--username <USER> -P \
--table CUSTMASTER \
--as-parquetfile \
--target-dir /warehouse/raw/amsp/custmaster \
--delete-target-dir \
-m 1
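For completeness, the saved job is then run from the shell with sqoop job --exec, which works fine outside Oozie (shown here as a minimal sketch, assuming the default local metastore):
sqoop job --exec ingest_amsp_custmaster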
Here's what I got from the Oozie error log:
java.lang.IllegalArgumentException: No enum constant com.cloudera.sqoop.SqoopOptions.FileLayout.ParquetFile
at java.lang.Enum.valueOf(Enum.java:238)
at org.apache.sqoop.SqoopOptions.loadProperties(SqoopOptions.java:522)
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.read(HsqldbJobStorage.java:299)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:198)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:283)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:200)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:183)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:64)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:235)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:436)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:350)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:211)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:254)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Intercepting System.exit(1)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Any help would be appreciated.
Thanks. Yusata.
Check the version of Sqoop used in the Oozie workflow. It seems like it is not 1.4.7: the stack trace line
org.apache.sqoop.SqoopOptions.loadProperties(SqoopOptions.java:522)
corresponds to https://github.com/apache/sqoop/blob/20af67ef60096b17e1d9585670e5ec787eb760e2/src/java/org/apache/sqoop/SqoopOptions.java#L522
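One quick way to check which Sqoop jar the workflow actually picks up is to list the active Oozie sharelib (a sketch; the URL and sharelib path below are the usual defaults and may differ on your cluster):
oozie admin -oozie http://<oozie-host>:11000/oozie -shareliblist sqoop
hdfs dfs -ls /user/oozie/share/lib/lib_*/sqoop
If the sqoop jar there is older than the one you use on the command line, its FileLayout enum may not include ParquetFile, and it will fail exactly as above while reading the saved job back.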
I have created an input directory and put a sample file in it. I have also created an output directory, but at the time of MapReduce program execution I got the below error. Here are the commands I used:
bin/hdfs dfs -mkdir /input
bin/hdfs dfs -put /home/biswajit/sample.txt /input/
bin/hadoop jar /usr/local/hadoop/hadoop-2.9.0/share/hadoop/mapreduce/units.jar com.hadoop.ProcessUnits /input/sample.txt /output
The error is:
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:54310/home/biswajit/input/sample.txt
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:294)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:236)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:322)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:341)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:333)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:202)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:871)
at com.hadoop.ProcessUnits.main(ProcessUnits.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
$HADOOP_HOME/input doesn't exist on HDFS.
$HADOOP_HOME is a bash variable on your local filesystem.
You only created a directory for /input, so you can either mkdir the full path with the variable if you want that command to run as-is, or remove the variable when running the JAR file.
As long as hdfs dfs -ls /input/* shows some files, the command otherwise looks fine, but I'm not sure what that Java class is actually expecting as input.
Note: there is a difference between
hdfs://localhost:54310/home/biswajit/input
And
hdfs://localhost:54310/input
More specifically, HDFS doesn't have /home folders, so it looks like you're either not running a pseudo-distributed cluster, or you made that directory yourself.
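A quick way to see which of the two actually exists (using the NameNode address from the error message):
hdfs dfs -ls hdfs://localhost:54310/input
hdfs dfs -ls hdfs://localhost:54310/home/biswajit/input
Whichever listing succeeds is the directory whose files you should pass to the job, e.g. hdfs://localhost:54310/input/sample.txt.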
I have installed Hadoop 2.4.1 and HBase 0.98.8 on 2 machines. When I run an HBase MapReduce job I get the below error:
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://pc1/opt/hbase-0.98.8-hadoop2/lib/hbase-server-0.98.8-hadoop2.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at thesis.test2.run(test2.java:93)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at thesis.test2.main(test2.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I can run Hadoop MapReduce jobs and simple HBase jobs without any problems. The code I'm trying to run is an example that is supposed to work.
Please provide the "jps" output. It seems like your HBase is not working; hopefully the problem is just with ZooKeeper.
I faced the exact same problem. You have to add the HBase library path to the .bashrc file: add the lib folder of HBase to CLASSPATH.
Also, add the HBase classpath to HADOOP_CLASSPATH.
Your .bashrc file should contain the following:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`${HBASE_HOME}/bin/hbase classpath`
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`${HBASE_HOME}/bin/hbase mapredcp`
export CLASSPATH=${HBASE_HOME}/lib/*
Note: CLASSPATH should point to the lib folder of your HBase installation. Use the following to compile and run your Java code:
javac Example.java
java -classpath $CLASSPATH:. Example
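With those exports sourced, resubmitting the job should let the submitter resolve the HBase jars from the local classpath rather than looking for them on HDFS (a sketch; the class name thesis.test2 is taken from the stack trace, the jar file name is assumed):
source ~/.bashrc
hadoop jar test2.jar thesis.test2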
I have HBase 0.94.0. I tried doing a bulk import using the importtsv tool.
Here is the command I gave:
./hadoop jar /home/ericsson/Desktop/ProjectFiles/hbase-0.94.0/hbase-0.94.0.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,a,b,c,d,e,f,g '-Dimporttsv.separator=,' Test1 /home/ericsson/Desktop/ProjectFiles/inputFiles1/CharginUsage-m-00000
Test1 - my table that already exists in HBase.
/home/ericsson/Desktop/ProjectFiles/inputFiles1/CharginUsage-m-00000 - the directory where I have the CSV file.
I got the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/collect/Multimap
at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Multimap
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
... 6 more
The importtsv task needs Google's Guava library in order to run. This library is present under $HBASE_HOME/lib/guava-<version>.jar.
It is a matter of telling Hadoop to pick up this Guava jar during execution. You could simply copy the jar from the HBase lib to the Hadoop lib, but a cleaner solution is to add the jar path to the Hadoop classpath, or run the task as shown below:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/lib/guava-<version>.jar
OR
export HADOOP_CLASSPATH=`hbase classpath`
./hadoop jar /home/ericsson/Desktop/ProjectFiles/hbase-0.94.0/hbase-0.94.0.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,a,b,c,d,e,f,g '-Dimporttsv.separator=,' Test1 /home/ericsson/Desktop/ProjectFiles/inputFiles1/CharginUsage-m-00000
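Either way, a quick sanity check before re-running the import is to confirm that a Guava jar is present and ends up on the resolved classpath (a sketch, assuming HBASE_HOME points at your hbase-0.94.0 directory):
ls $HBASE_HOME/lib/guava-*.jar
hadoop classpath | tr ':' '\n' | grep -i guava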