Hive import not taking place using Sqoop - hadoop

I am trying to import a MySQL table into Hive, but the import does not go through with the query below:
sqoop import --connect jdbc:mysql://localhost/cars --username root --query 'Select carnum,carname from carsinfo where $CONDITIONS' --hive-import --hive-table exams.examresults --target-dir /hive_table1_data --m 1
I am getting the following error while importing:
Encountered IOException running import job: java.io.IOException.
I really don't understand what mistake I am making. I have spent hours on this, but nothing seems to work.
Thanks!

Looks like you forgot to specify the port.
Try this: 'jdbc:mysql://localhost:3306/cars'. This, of course, assumes that you're running MySQL on the default port.
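Applied to the command from the question, the full invocation would look something like this (a sketch that only adds the port and keeps the original query, table, and paths):

# Same command as in the question, with the MySQL port made explicit
sqoop import \
  --connect jdbc:mysql://localhost:3306/cars \
  --username root \
  --query 'Select carnum,carname from carsinfo where $CONDITIONS' \
  --hive-import \
  --hive-table exams.examresults \
  --target-dir /hive_table1_data \
  -m 1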

Related

Getting Protocol violation error while sqoop import from oracle to hive

I am trying to import data from Oracle to Hive through the sqoop import command, but I am getting a java.sql.SQLException: protocol violation error. I checked and found one text column with length 4000.
When I removed that column and re-ran the sqoop command, it worked,
so that column alone is what triggers the protocol violation error.
Is this because of the length, or something else?
Can someone help me solve this? Below is the sqoop command I am using:
sqoop import --connect jdbc:oracle:thin:@:port/servicename --username --password --query "select * from table_name where $CONDITIONS" --hive-drop-import-delims --target-dir /user/test --map-column-java -m 1
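For illustration, the working variant described above selects the columns explicitly and leaves out the 4000-character text column; the column names, host, and user below are placeholders, not the real schema:

# Hypothetical column list: everything except the long text column that triggered the protocol violation
sqoop import \
  --connect jdbc:oracle:thin:@<host>:<port>/<servicename> \
  --username <user> -P \
  --query 'select id, name, created_date from table_name where $CONDITIONS' \
  --hive-drop-import-delims \
  --target-dir /user/test \
  -m 1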

SAP HANA Sqoop Import

I am trying to sqoop import from a HANA view. I have tried many approaches and the error still persists. Has anyone had a similar experience? Please help me figure out if I'm missing something.
Sqoop job:
sqoop import --driver com.sap.db.jdbc.Driver --connect 'jdbc:sap://hostname:30015?currentschema="_SYS_BIC"' --username HDP_READ --password-file /path/vangaphx/clrhanapwd --query 'SELECT * from ZFI_LOS_SUMMARY_NET_GROSS WHERE $CONDITIONS' --delete-target-dir --target-dir /user/vangaphx/well2/SAPdatazone --num-mappers 1
Error:

Sqoop import job error org.kitesdk.data.ValidationException for Oracle

Sqoop import job for Oracle 11g fails with the error:
ERROR sqoop.Sqoop: Got exception running Sqoop:
org.kitesdk.data.ValidationException: Dataset name
81fdfb8245ab4898a719d4dda39e23f9_C46010.HISTCONTACT is not
alphanumeric (plus '_')
Here's the complete command:
$ sqoop job --create ingest_amsp_histcontact -- import --connect "jdbc:oracle:thin:@<IP>:<PORT>/<SID>" --username "c46010" -P --table C46010.HISTCONTACT --check-column ITEM_SEQ --target-dir /tmp/junk/amsp.histcontact -as-parquetfile -m 1 --incremental append
$ sqoop job --exec ingest_amsp_histcontact
It's an incremental import in Parquet format. Surprisingly, it works fine if I use another format such as --as-textfile.
This is a similar issue to Sqoop job fails with KiteSDK validation error for Oracle import,
but I've used ojdbc6, and switching to ojdbc7 doesn't work either.
Sqoop version: 1.4.7
Oracle version: 11g
Thanks,
Yusata
I know it is kind of late, but I faced the same problem and solved it by omitting the Parquet file option.
Try running the job without
-as-parquetfile
There's a workaround: omitting the "." character in the --table parameter works for me, so instead of --table <schema>.<table_name> I use --table <table_name>. But this doesn't work if you are importing a table from another schema in Oracle.
The problem is the "." in the --target-dir option. The workaround: change the target dir to "/tmp/junk/amsp_histcontact", and when the sqoop job finishes, rename the HDFS target dir to "/tmp/junk/amsp.histcontact".
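Putting that together with the command from the question, the workaround would look roughly like this (a sketch; the connection placeholders are those from the original post):

# Create the job with a dot-free target dir so the Kite dataset-name validation passes
sqoop job --create ingest_amsp_histcontact -- import \
  --connect "jdbc:oracle:thin:@<IP>:<PORT>/<SID>" \
  --username "c46010" -P \
  --table C46010.HISTCONTACT \
  --check-column ITEM_SEQ --incremental append \
  --target-dir /tmp/junk/amsp_histcontact \
  --as-parquetfile -m 1

sqoop job --exec ingest_amsp_histcontact

# Once the job finishes, rename the HDFS directory back to the dotted name
hdfs dfs -mv /tmp/junk/amsp_histcontact /tmp/junk/amsp.histcontact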

Error with sqoop import from mysql to hbase

I started learning Sqoop recently with the Cloudera CDH5 VM.
I created a MySQL table from a CSV file with the columns baseid, date, cars, kms.
Database used: mysql
Table created: uberdata
In the HBase shell, I created a table named myuberdatatable with the column family uber_details.
I checked with the scan command and saw an empty table with 0 rows.
To transfer the data from MySQL to HBase:
sqoop import jdbc:mysql://localhost/mysql --username root --password cloudera --table uberdata --hbase-table myuberdatatable --column-family trip_details --hbase-row-key base -m 1
I am getting the following error:
Syntax error, unexpected tIdentifier
with a marker pointing just before jdbc.
It is probably a small error, but I could not find a solution on Stack Overflow.
Can anyone help me fix this? Thanks in advance.
Yes, it is a syntax error: you have missed the --connect option in the sqoop import statement.
Please use this format (tested):
sqoop import --connect jdbc:mysql://localhost/emp --username root --password cloudera --table employee --hbase-table empdump --column-family emp_id --hbase-row-key id -m 1
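Applied to the command from the question, the fix is just the missing --connect (a sketch that keeps the database, table, and row key exactly as posted):

# Same as the question's command, with --connect added before the JDBC URL
sqoop import \
  --connect jdbc:mysql://localhost/mysql \
  --username root --password cloudera \
  --table uberdata \
  --hbase-table myuberdatatable \
  --column-family trip_details \
  --hbase-row-key base \
  -m 1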

sqoop import is showing error

I'm using Hadoop 2.5.1 and Sqoop 1.4.6.
I am using sqoop import to import a table from a MySQL database for use with Hadoop. It shows the following error.
Sqoop command:
sqoop import --connect jdbc:mysql://localhost/<dbname> --username hadoopsqoop --password hadoop#123 --table tablename -m 1
Exception:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer
Is there any way to figure out the issue?
I figured out the issue: setting HADOOP_HOME correctly solved my problem.
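For reference, a minimal sketch of that environment fix; the install path below is a placeholder for wherever your Hadoop distribution actually lives:

# Point Sqoop at the Hadoop installation before running sqoop import
export HADOOP_HOME=/usr/local/hadoop        # hypothetical install path
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export PATH=$PATH:$HADOOP_HOME/bin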
How can you import without mentioning where to store the file? Try this:
sqoop import --connect jdbc:mysql://localhost/dbname --username hadoopsqoop --password hadoop#123 --table tablename --target-dir 'hdfspath' -m 1
