Export Avro file to Oracle using Sqoop

We are using the Hortonworks distro with Sqoop 1.4.6. I am trying to load an Oracle table from an Avro file using sqoop export, but it fails with the following stack trace:
Error: java.io.IOException: Can't export data, please check failed map task logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.RuntimeException: Can't parse input data: 'Objavro.schema��{"type":"record"'
at HADOOP_CLAIMS.__loadFromFields(HADOOP_CLAIMS.java:208)
at HADOOP_CLAIMS.parse(HADOOP_CLAIMS.java:156)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
Caused by: java.lang.NumberFormatException
at java.math.BigDecimal.<init>(BigDecimal.java:470)
at java.math.BigDecimal.<init>(BigDecimal.java:739)
at HADOOP_CLAIMS.__loadFromFields(HADOOP_CLAIMS.java:205)
It looks like it's trying to use TextExportMapper instead of AvroExportMapper. I found an issue that should have addressed this problem in version 1.4.5: https://issues.apache.org/jira/browse/SQOOP-1283
Any idea why this is still happening in Sqoop 1.4.6? Is there a different component that needs to be patched as well?
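For context, a minimal sketch of the kind of export invocation involved; the connect string, credentials, and export directory are placeholders, not the original values. Note that the stack trace shows TextExportMapper choking on the Avro container header ('Objavro.schema...'), i.e. the job is parsing the Avro file as plain text.

sqoop export \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username etl_user -P \
  --table HADOOP_CLAIMS \
  --export-dir /user/etl/claims_avro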

Related

Hive cannot find my InputFormat when calling MapReduce

I'm using a CDH cluster.
After writing a custom InputFormat, I copied it to every HiveServer2 lib directory and restarted HiveServer2.
When I create a table using my InputFormat and run a simple query like 'select *', it runs and everything is fine.
BUT!
When I run a query that needs MapReduce, such as one with 'where', 'join', or 'count', it throws a class-not-found error:
Error: java.io.IOException: cannot find class com.my.parquet.MyParquetInputFormat
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:714)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Hope you can help!
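Assuming the jar only reaches HiveServer2's own classpath, a sketch of the usual fix (the jar path and table name are placeholders): a plain 'select *' runs as a local fetch task inside HiveServer2, where the jar is visible, but filters, joins, and aggregations launch MapReduce tasks, which only see jars shipped through the aux-jars mechanism.

# Per session:
hive -e "ADD JAR /opt/hive/aux/my-parquet-inputformat.jar;
         SELECT count(*) FROM my_table;"
# Or permanently: set hive.aux.jars.path in hive-site.xml on the
# HiveServer2 host and restart it.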

Class not found in a MapReduce job

I have a MapReduce job which takes an Avro file as input. I export it, along with all the required libraries (jar files), into a single jar. I have two different clusters: one is an HDInsight simulator and the other is an HDP sandbox. It works fine on the HDP sandbox, but on the HDInsight simulator it gives me an error and cannot find the AvroInputFormat class. I tried running the job with the -libjars option but it didn't help. Here is the error message:
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.avro.mapred.AvroInputFormat not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1927)
at org.apache.hadoop.mapred.JobConf.getInputFormat(JobConf.java:686)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:168)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:409)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.avro.mapred.AvroInputFormat not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1895)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1919)
... 9 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.avro.mapred.AvroInputFormat not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1893)
... 10 more
This looks weird because it runs fine on one cluster! Does anyone know what the problem could be?
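One thing worth checking, since -libjars is easy to get wrong: it is a generic option, so it only takes effect when the driver parses generic options (i.e. implements org.apache.hadoop.util.Tool / uses GenericOptionsParser), and the client JVM additionally needs the jar on its own classpath. A sketch with placeholder paths and class names; one cluster may simply bundle avro-mapred on the task classpath while the other does not:

export HADOOP_CLASSPATH=/opt/libs/avro-mapred-hadoop2.jar
hadoop jar my-job.jar com.example.MyDriver \
  -libjars /opt/libs/avro-mapred-hadoop2.jar \
  /input /output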

Cassandra : Caused by: InvalidRequestException(why:Invalid restrictions)

I am very new to Cassandra and am using 1.2.10. I have a primary key column of timestamp datatype, and I am trying to retrieve data for a date range. Since we can't use BETWEEN in Cassandra, I am using greater than (>) and less than (<) to bound the range. This seems to work perfectly in Cassandra's cqlsh, but with the pig_cassandra integration, here is the load function:
cql://<keyspace>/<columnfamily>?where_clause=time1%3E1357054841000590+and+time1%3C1357121822000430
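(Percent-decoding the where_clause confirms what is being sent: %3E is '>', %3C is '<', and '+' is a space. A quick shell check:)

echo 'time1%3E1357054841000590+and+time1%3C1357121822000430' \
  | sed -e 's/+/ /g' -e 's/%3E/>/g' -e 's/%3C/</g'
# -> time1>1357054841000590 and time1<1357121822000430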
And here is the error it throws.
2013-12-18 04:32:51,196 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Unable to recreate exception from backed error: java.lang.RuntimeException
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:651)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:352)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.computeNext(CqlPagingRecordReader.java:275)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.getProgress(CqlPagingRecordReader.java:181)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.getProgress(PigRecordReader.java:169)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:514)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:539)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: InvalidRequestException(why:Invalid restrictions found on time1)
at org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
at org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:591)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:621)
... 17 more
Any help is highly appreciated!!!
We seem to have fixed the problem by modifying the Cassandra source, compiling it, and adding the resulting jar file to the lib folder.
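For reference, the build-and-drop-in part of that workaround might look like the following; the branch name, build target, and install path are assumptions, and the actual source modification is elided:

git clone https://github.com/apache/cassandra.git && cd cassandra
git checkout cassandra-1.2.10
# ... apply the local patch here ...
ant jar    # produces build/apache-cassandra-*.jar
cp build/apache-cassandra-*.jar /usr/share/cassandra/lib/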

Trying to load indexed LZO file using LzoPigStorage and elephant-bird

I've got a log file with default LZO compression and an .index file generated using Hadoop-LZO, but when I run a simple Pig script to retrieve the top 100 records using LzoPigStorage, I get the following exception:
Message: Unexpected System Error Occured: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:130)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
at java.lang.Thread.run(Thread.java:724)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:257)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
... 3 more
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at com.twitter.elephantbird.mapreduce.input.LzoInputFormat.listStatus(LzoInputFormat.java:55)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:269)
at com.twitter.elephantbird.mapreduce.input.LzoInputFormat.getSplits(LzoInputFormat.java:111)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:452)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:469)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:366)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:336)
I am running Hadoop 2.0, Pig 0.11, and elephant-bird 2.2.3.
I don't use elephant-bird, so I'm not entirely sure that this is the problem.
But looking at their build for v2.2.3, it was compiled against hadoop-0.20.2 and pig-0.9.2. I've seen problems with UDFs in Pig when running on a newer version than what the UDF was compiled against.
Are you able to upgrade elephant-bird to a newer version or recompile against the correct libraries?
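One quick way to confirm this kind of mismatch: on Hadoop 2, org.apache.hadoop.mapreduce.JobContext is an interface, while under Hadoop 1 it was a class, which is exactly what the IncompatibleClassChangeError is complaining about. A sketch, with a placeholder jar location:

# Inspect JobContext as shipped on the cluster; the jar path is a guess.
javap -classpath /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-core.jar \
    org.apache.hadoop.mapreduce.JobContext | head -2
# On Hadoop 2 this shows: public interface org.apache.hadoop.mapreduce.JobContext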

CDH4 Hbase using Pig ERROR 2998 java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/Filter

I am using CDH4 in a pseudo-distributed mode and I have some trouble working with HBase and Pig together (but both work fine alone).
I am following this nice tutorial step by step:
http://blog.whitepages.com/2011/10/27/hbase-storage-and-pig/
So my Pig script looks like this:
register /usr/lib/zookeeper/zookeeper-3.4.3-cdh4.1.2.jar
register /usr/lib/hbase/hbase-0.92.1-cdh4.1.2-security.jar
register /usr/lib/hbase/lib/guava-11.0.2.jar
raw_data = LOAD 'input.csv' USING PigStorage(',') AS (
    listing_id: chararray,
    fname: chararray,
    lname: chararray );
STORE raw_data INTO 'hbase://sample_names' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage ('info:fname info:lname');
But upon entering the following command
pig -x local hbase_sample.pig
I get the following error message:
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. org/apache/hadoop/hbase/filter/Filter
The main cause I found online would be the classpaths, so here's my current configuration; maybe you'll spot some nonsense in it:
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HBASE_HOME=/usr/lib/hbase
export HBASE_CONF_DIR=/etc/hbase/conf
export PIG_HOME=/usr/lib/pig
export PIG_CONF_DIR=/etc/pig/conf
export PATH="$HADOOP_HOME/bin:$HBASE_HOME/bin:$HADOOP_MAPRED_HOME/bin:$PIG_HOME/bin:$PATH"
export HADOOP_CLASSPATH="$HBASE_HOME/bin"
export PIG_CLASSPATH="$HBASE_HOME/bin:$PIG_HOME/bin"
If you need more details, here's the complete pig stack trace:
Pig Stack Trace
---------------
ERROR 2998: Unhandled internal error. org/apache/hadoop/hbase/filter/Filter
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/Filter
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:478)
at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:508)
at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:791)
at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:780)
at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:4583)
at org.apache.pig.parser.LogicalPlanGenerator.store_clause(LogicalPlanGenerator.java:6225)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1335)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1594)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1545)
at org.apache.pig.PigServer.registerQuery(PigServer.java:545)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:430)
at org.apache.pig.Main.main(Main.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.filter.Filter
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 28 more
Your PIG_CLASSPATH is wrong; it should look like the following:
export PIG_CLASSPATH="`hbase classpath`:$PIG_CLASSPATH"
This will add the missing HBase-related jars to Pig's classpath.
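As a sanity check, `hbase classpath` prints the full HBase runtime classpath, so you can verify that a jar containing org.apache.hadoop.hbase.filter.Filter is actually on it:

hbase classpath | tr ':' '\n' | grep -i hbase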
