Sqoop export from HDFS to Teradata - hadoop

In a Sqoop export from HDFS to Teradata, I am facing the following error. How do I fix this issue?
INFO mapreduce.Job: Task Id : attempt_1435465700866_0006_m_000001_2, Status : FAILED
Error: com.teradata.connector.common.exception.ConnectorException: Batch insert job failed
The command I am using is:
sqoop export --connect jdbc:teradata://x.x.x.x/DATABASE=university --username dbc --password dbc --input-fields-terminated-by ',' --table <table_name> --num-mappers 100 --export-dir <path>

Have you tried to export the same table multiple times? Teradata can cause issues when duplicate data is exported to it repeatedly.
I also had an issue with Teradata because I tried to export the same table multiple times and it started throwing errors.
Try reinstalling Teradata, or try freeing up some space in your Teradata database.
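If you want to check whether the target table already contains rows from a previous export before re-running, here is a hedged sketch using sqoop eval (assuming the Teradata JDBC driver is on Sqoop's classpath; <table_name> is the same placeholder as in the question):
# count the rows already sitting in the target table before exporting again
sqoop eval --connect jdbc:teradata://x.x.x.x/DATABASE=university --username dbc --password dbc --query "SELECT COUNT(*) FROM <table_name>"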

Related

Sqoop import job error org.kitesdk.data.ValidationException for Oracle

Sqoop import job for Oracle 11g fails with error
ERROR sqoop.Sqoop: Got exception running Sqoop:
org.kitesdk.data.ValidationException: Dataset name
81fdfb8245ab4898a719d4dda39e23f9_C46010.HISTCONTACT is not
alphanumeric (plus '_')
here's the complete command:
$ sqoop job --create ingest_amsp_histcontact -- import --connect "jdbc:oracle:thin:@<IP>:<PORT>/<SID>" --username "c46010" -P --table C46010.HISTCONTACT --check-column ITEM_SEQ --target-dir /tmp/junk/amsp.histcontact --as-parquetfile -m 1 --incremental append
$ sqoop job --exec ingest_amsp_histcontact
It's an incremental import in Parquet format. Surprisingly, it works fine if I use another format like --as-textfile.
This is a similar issue to Sqoop job fails with KiteSDK validation error for Oracle import,
but I've used ojdbc6, and switching to ojdbc7 doesn't work either.
Sqoop version: 1.4.7
Oracle version: 11g
Thanks,
Yusata
I know it is kind of late, but I faced the same problem and solved it by omitting the Parquet file option.
Try running the job without
--as-parquetfile
There's a workaround: omitting the "." character in the --table parameter works for me, so instead of --table <schema>.<table_name>, I use --table <table_name>. But this doesn't work if you import a table from another schema in Oracle.
The problem is the "." in the --target-dir option. The workaround: change the target dir to "/tmp/junk/amsp_histcontact". When the sqoop job finishes, rename the HDFS target dir to "/tmp/junk/amsp.histcontact", as sketched below.
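A hedged sketch of that workaround applied to the command from the question (same job name, table, and placeholders as above; the connect string is written with '@', as Oracle's thin JDBC URL syntax expects):
# create the job against a dot-free target dir
sqoop job --create ingest_amsp_histcontact -- import --connect "jdbc:oracle:thin:@<IP>:<PORT>/<SID>" --username "c46010" -P --table C46010.HISTCONTACT --check-column ITEM_SEQ --target-dir /tmp/junk/amsp_histcontact --as-parquetfile -m 1 --incremental append
sqoop job --exec ingest_amsp_histcontact
# rename the output to the original dotted path once the job finishes
hdfs dfs -mv /tmp/junk/amsp_histcontact /tmp/junk/amsp.histcontact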

Sqoop create hive table ERROR - Encountered IOException running create table job

I am running Sqoop on a CentOS 7 machine that has Hadoop/MapReduce and Hive already installed. I read in a tutorial that when importing data from an RDBMS (SQL Server in my case) to HDFS I need to run the following command:
sqoop import -Dorg.apache.sqoop.splitter.allow_text_splitter=true --connect 'jdbc:sqlserver://hostname;database=databasename' --username admin --password admin123 --table tableA
Everything works perfectly with this step. The next step is creating a Hive table that has the same structure as the RDBMS table (SQL Server in my case), using a sqoop command:
sqoop create-hive-table --connect 'jdbc:sqlserver://hostname;database=databasename' --username admin --password admin123 --table tableA --hive-table hivetablename --fields-terminated-by ','
However, whenever I run the above command I get the following error:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask.
com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang
/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
18/04/01 19:37:52 ERROR ql.Driver: FAILED: Execution Error, return code 1
from org.apache.hadoop.hive.ql.exec.DDLTask.
com.fasterxml.jackson.databind.ObjectMapper.readerFor(Ljava/lang
/Class;)Lcom/fasterxml/jackson/databind/ObjectReader;
18/04/01 19:37:52 INFO ql.Driver: Completed executing
command(queryId=hadoop_20180401193745_1f3cf07d-ca16-40dd-
8f8d-1e426ecd5860); Time taken: 0.212 seconds
18/04/01 19:37:52 INFO conf.HiveConf: Using the default value passed in
for log id: 0813b5c9-f374-4920-b8c6-b8541449a6eb
18/04/01 19:37:52 INFO session.SessionState: Resetting thread name to
main
18/04/01 19:37:52 INFO conf.HiveConf: Using the default value passed in
for log id: 0813b5c9-f374-4920-b8c6-b8541449a6eb
18/04/01 19:37:52 INFO session.SessionState: Deleted directory: /tmp/hive
/hadoop/0813b5c9-f374-4920-b8c6-b8541449a6eb on fs with scheme hdfs
18/04/01 19:37:52 INFO session.SessionState: Deleted directory: /tmp/hive
/java/hadoop/0813b5c9-f374-4920-b8c6-b8541449a6eb on fs with scheme file
18/04/01 19:37:52 ERROR tool.CreateHiveTableTool: Encountered IOException
running create table job: java.io.IOException: Hive CliDriver exited with
status=1
I am not a Java expert, but do you have any idea what is causing this?
I've faced the same issue. It seems that there are some compatibility issues between my versions of Sqoop (1.4.7) and Hive (2.3.4).
The problem arises from the version of the jackson-* jar files in $SQOOP_HOME/lib: some of them are too old for Hive, which needs versions 2.6 or newer.
The solution that I found was to replace the following files in $SQOOP_HOME/lib with their counterparts in $HIVE_HOME/lib:
jackson-core-*.jar
jackson-databind-*.jar
jackson-annotations-*.jar
They are all from versions 2.6+ and this seems to work. Not sure it's good practice though.
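A hedged sketch of that swap, assuming $SQOOP_HOME and $HIVE_HOME are set and that keeping a backup of the old jars is acceptable:
# move the old Jackson jars out of Sqoop's lib (kept as a backup), then copy in Hive's newer ones
mkdir -p $SQOOP_HOME/lib/jackson-backup
mv $SQOOP_HOME/lib/jackson-core-*.jar $SQOOP_HOME/lib/jackson-databind-*.jar $SQOOP_HOME/lib/jackson-annotations-*.jar $SQOOP_HOME/lib/jackson-backup/
cp $HIVE_HOME/lib/jackson-core-*.jar $HIVE_HOME/lib/jackson-databind-*.jar $HIVE_HOME/lib/jackson-annotations-*.jar $SQOOP_HOME/lib/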
I was facing the same issue; I downgraded my Hive to 1.2.2 and it works, so that should solve the issue.
But I'm not sure that helps if you want to use Sqoop specifically with Hive 2.
Instead of writing two different statements, you can put the whole thing in one statement, which will fetch the data from SQL Server and then create the Hive table too.
sqoop import -Dorg.apache.sqoop.splitter.allow_text_splitter=true --connect 'jdbc:sqlserver://hostname;database=databasename' --username admin --password admin123 --table tableA --hive-import --hive-overwrite --hive-table hivetablename --fields-terminated-by ',' --hive-drop-import-delims --null-string '\\N' --null-non-string '\\N'
For this, please check the jackson-core, jackson-databind and jackson-annotations jars. They should be recent versions; this error usually comes from an older version. Place these jars inside both the Hive lib and the Sqoop lib. Also check the libthrift jar: the version in Hive and HBase should be the same, and that same jar should be copied into the Sqoop lib.
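A hedged sketch of those checks; the *_HOME variables and glob patterns are assumptions, so adjust them to your installation:
# compare the Jackson and libthrift versions shipped with each component
ls $SQOOP_HOME/lib | grep -E 'jackson|libthrift'
ls $HIVE_HOME/lib | grep -E 'jackson|libthrift'
ls $HBASE_HOME/lib | grep libthrift
# if Hive and HBase agree on the libthrift version, copy that jar into Sqoop's lib as well
cp $HIVE_HOME/lib/libthrift-*.jar $SQOOP_HOME/lib/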

Error with sqoop import from mysql to hbase

I started learning Sqoop recently with the Cloudera CDH5 VM.
I created a MySQL table from a CSV file with the columns baseid, date, cars, kms.
Database used: mysql
Table created: uberdata
In the HBase shell, I created a table named myuberdatatable with the column family uber_details.
I checked with the scan command and saw an empty table with 0 rows.
To transfer the data from MySQL to HBase:
sqoop import jdbc:mysql://localhost/mysql --username root --password cloudera
--table uberdata --hbase-table myuberdatatable --column-family trip_details
--hbase-row-key base -m 1
I am getting the following error:
Syntax error, unexpected tIdentifier
with a marker pointing just before jdbc.
It is probably a small error, but I could not find a solution on Stack Overflow.
Can anyone help to fix this? Thanks in advance.
Yes, it is a syntax error: you have missed the --connect option in the sqoop import statement.
Please use this format (tested):
sqoop import --connect jdbc:mysql://localhost/emp --username root --password cloudera --table employee --hbase-table empdump --column-family emp_id --hbase-row-key id -m 1
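Applied to the table from the question, a hedged sketch (assuming the same local MySQL database and credentials the asker used):
sqoop import --connect jdbc:mysql://localhost/mysql --username root --password cloudera --table uberdata --hbase-table myuberdatatable --column-family trip_details --hbase-row-key base -m 1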

Sqoop job fails with KiteSDK validation error for Oracle import

I am attempting to run a Sqoop job to load from an Oracle DB into a Hadoop cluster in Parquet format. The job is incremental.
Sqoop version is 1.4.6. Oracle version is 12c. Hadoop version is 2.6.0 (distro is Cloudera 5.5.1).
The Sqoop command is (this creates the job, and executes it):
$ sqoop job -fs hdfs://<HADOOPNAMENODE>:8020 \
--create myJob \
-- import \
--connect jdbc:oracle:thin:@<DBHOST>:<DBPORT>/<DBNAME> \
--username <USERNAME> \
-P \
--as-parquetfile \
--table <USERNAME>.<TABLENAME> \
--target-dir <HDFSPATH> \
--incremental append \
--check-column <TABLEPRIMARYKEY>
$ sqoop job --exec myJob
Error on execute:
16/02/05 11:25:30 ERROR sqoop.Sqoop: Got exception running Sqoop:
org.kitesdk.data.ValidationException: Dataset name
05112528000000918_2088_<USERNAME>.<TABLENAME>
is not alphanumeric (plus '_')
at org.kitesdk.data.ValidationException.check(ValidationException.java:55)
at org.kitesdk.data.spi.Compatibility.checkDatasetName(Compatibility.java:103)
at org.kitesdk.data.spi.Compatibility.check(Compatibility.java:66)
at org.kitesdk.data.spi.filesystem.FileSystemMetadataProvider.create(FileSystemMetadataProvider.java:209)
at org.kitesdk.data.spi.filesystem.FileSystemDatasetRepository.create(FileSystemDatasetRepository.java:137)
at org.kitesdk.data.Datasets.create(Datasets.java:239)
at org.kitesdk.data.Datasets.create(Datasets.java:307)
at org.apache.sqoop.mapreduce.ParquetJob.createDataset(ParquetJob.java:107)
at org.apache.sqoop.mapreduce.ParquetJob.configureImportJob(ParquetJob.java:80)
at org.apache.sqoop.mapreduce.DataDrivenImportJob.configureMapper(DataDrivenImportJob.java:106)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:260)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:668)
at org.apache.sqoop.manager.OracleManager.importTable(OracleManager.java:444)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:228)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:283)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Troubleshooting Steps:
0) HDFS is stable, other Sqoop jobs are functional, Oracle source DB is up and the connection has been tested.
1) I tried creating a synonym in Oracle so that I could simply pass the --table option as:
--table TABLENAME (without the username)
This gave me an error that the table name was not correct; it needs the full USERNAME.TABLENAME for the --table option.
Error:
16/02/05 12:04:46 ERROR tool.ImportTool: Imported Failed: There is no column found in the target table <TABLENAME>. Please ensure that your table name is correct.
2) I made sure that this is a Parquet issue. I removed the --as-parquetfile option and the job was successful.
3) I wondered if this is somehow caused by the incremental options. I removed the --incremental append & --check-column options and the job was successful. This confuses me.
4) I tried the job with MySQL and it was successful.
Has anyone run into something similar? Is there a way (or is it even advisable) to disable the Kite validation? It seems that the dataset is being created with dots ("."), which Kite SDK then complains about, but this is an assumption on my part as I am not too familiar with Kite SDK.
Thanks in advance,
Jose
Resolved. There seems to be a known issue with JDBC connectivity to Oracle 12c. Using a specific OJDBC6 jar (instead of 7) did the trick. FYI: the OJDBC jar is installed in /usr/share/java/ and a symbolic link is created in /installpath.../lib/sqoop/lib/
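A hedged sketch of that setup; the Sqoop lib path below is a typical CDH parcel location and is an assumption, not the exact '/installpath.../' mentioned above:
# put the Oracle driver with the other shared jars, then link it into Sqoop's lib
cp ojdbc6.jar /usr/share/java/
ln -s /usr/share/java/ojdbc6.jar /opt/cloudera/parcels/CDH/lib/sqoop/lib/ojdbc6.jar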
As reported by user @Remya Senan,
breaking the parameter
--hive-table my_hive_db_name.my_hive_table_name
into separate params
--hive-database my_hive_db_name
--hive-table my_hive_table_name
did the trick for me
My environment was
Sqoop v1.4.7
Hive 2.3.3
Tip: I was on emr-5.19.0
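For context, a hedged sketch of what the split looks like in a full command; the connection details are placeholders carried over from the question, and --hive-import is assumed since this variant writes to a Hive table rather than a plain HDFS directory:
sqoop import --connect jdbc:oracle:thin:@<DBHOST>:<DBPORT>/<DBNAME> --username <USERNAME> -P --table <USERNAME>.<TABLENAME> --hive-import --hive-database my_hive_db_name --hive-table my_hive_table_name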
I also got this error when I was Sqoop-importing all tables as Parquet files on CDH 5.8. Looking at the error message, I felt this implementation does not support directories with "-" in their name. Based on this understanding, I removed the "-" from the directory name, re-ran the sqoop import command, and everything worked fine. Hope this helps!

Hive External Table

I am trying to import data from Oracle to Hive using Sqoop.
I used the command below once; now I want to overwrite the existing data with new data (a daily action).
I ran this command again:
sqoop import --connect jdbc:oracle:thin:@UK01WRS6014:2184:WWSYOIT1
--username HIVE --password hive --table OIDS.ALLOCATION_SESSION_DIMN
--hive-overwrite --hive-database OI_DB --hive-table ALLOCATION_SESSION_DIMN
But I am getting a "File already exists" error:
14/10/14 07:43:59 ERROR security.UserGroupInformation:
PriviledgedActionException as:axchat
(auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException:
Output directory
hdfs://uslibeiadg004.aceina.com:8020/user/axchat/OIDS.ALLOCATION_SESSION_DIMN
already exists
The tables that I created in Hive were all external tables.
As with MapReduce, do we have to delete that directory every time we execute the same command?
Any help would be highly appreciated.
When you delete from an EXTERNAL table you only delete objects in the Hive metastore: you don't delete the files over which that table is superimposed. A non-external table belongs solely to Hive and, when deleted, will result in both metastore data AND HDFS data being removed.
So you can either delete the HDFS data explicitly, or define the table as being internal to Hive.
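A hedged sketch of the first option, removing the existing output directory (the path is taken from the error above) before re-running the import; Sqoop also has a --delete-target-dir import option that can do this automatically, though it's worth checking how it behaves with your Hive setup:
hdfs dfs -rm -r hdfs://uslibeiadg004.aceina.com:8020/user/axchat/OIDS.ALLOCATION_SESSION_DIMN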
