Sqoop job Unrecognized argument: --merge-key

I am trying to create a Sqoop job with incremental lastmodified, but it throws ERROR tool.BaseSqoopTool: Unrecognized argument: --merge-key. Sqoop job:
sqoop job --create ImportConsentTransaction -- import --connect jdbc:mysql://quickstart:3306/production --table ConsentTransaction --fields-terminated-by '\b' --incremental lastmodified --merge-key transactionId --check-column updateTime --target-dir '/user/hadoop/ConsentTransaction' --last-value 0 --password-file /user/hadoop/sqoop.password --username user -m 1

Make sure --incremental lastmodified is present: --merge-key is only accepted as part of an incremental lastmodified import. If it is already there, as in the command above, check your Sqoop version, since older releases do not recognize --merge-key on sqoop import / sqoop job at all.
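If the version supports it, the same job can be written with line continuations, keeping --merge-key next to the incremental options (a sketch; all values are taken from the command above):

sqoop job --create ImportConsentTransaction \
-- import \
--connect jdbc:mysql://quickstart:3306/production \
--username user \
--password-file /user/hadoop/sqoop.password \
--table ConsentTransaction \
--target-dir '/user/hadoop/ConsentTransaction' \
--fields-terminated-by '\b' \
--incremental lastmodified \
--check-column updateTime \
--merge-key transactionId \
--last-value 0 \
-m 1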

Related

sqoop incremental import error

I want to import the latest updates on my AS400 table with an incremental sqoop import. This is my sqoop command
(I am sure about all the variables; to_porcess_ts is a string timestamp, yyyymmddhhmmss):
sqoop import --verbose --driver $SRC_DRIVER_CLASS --connect $SRC_URL --username $SRC_LOGIN --password $SRC_PASSWORD \
--table $SRC_TABLE --hive-import --hive-table $SRC_TABLE_HIVE --target-dir $DST_HDFS \
--hive-partition-key "to_porcess_ts" --hive-partition-value $current_date --split-by $DST_SPLIT_COLUMN --num-mappers 1 \
--boundary-query "$DST_QUERY_BOUNDARY" \
--incremental-append --check-column "to_porcess_ts" --last-value $(hive -e "select max(unix_timestamp(to_porcess_ts, 'ddmmyyyyhhmmss')) from $SRC_TABLE_HIVE"); \
--compress --compression-codec org.apache.hadoop.io.compress.SnappyCodec
I got this error:
18/05/14 16:14:18 ERROR tool.BaseSqoopTool: Error parsing arguments for import:
18/05/14 16:14:18 ERROR tool.BaseSqoopTool: Unrecognized argument: --incremental-append
18/05/14 16:14:18 ERROR tool.BaseSqoopTool: Unrecognized argument: --check-column
18/05/14 16:14:18 ERROR tool.BaseSqoopTool: Unrecognized argument: to_porcess_ts
18/05/14 16:14:18 ERROR tool.BaseSqoopTool: Unrecognized argument: --last-value
18/05/14 16:14:18 ERROR tool.BaseSqoopTool: Unrecognized argument: -46039745595
Two fixes are needed. The root cause of the "Unrecognized argument" errors is --incremental-append: the correct syntax is --incremental append (a space, not a hyphen), and once Sqoop hits an unknown argument it rejects everything after it. Separately, remove the stray ; from the end of this line, since it terminates the shell command and the --compress options on the next line never reach Sqoop:
--last-value $(hive -e "select max(unix_timestamp(to_porcess_ts, 'ddmmyyyyhhmmss')) from $SRC_TABLE_HIVE") \
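For example, with the stray semicolon removed and the flag written as --incremental append, the tail of the command would read (everything else unchanged from the question):

--incremental append --check-column "to_porcess_ts" --last-value $(hive -e "select max(unix_timestamp(to_porcess_ts, 'ddmmyyyyhhmmss')) from $SRC_TABLE_HIVE") \
--compress --compression-codec org.apache.hadoop.io.compress.SnappyCodec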

sqoop 1.4.4, from Oracle to Hive, some fields in lower case, ERROR

I am using sqoop 1.4.4 to transfer data from Oracle to Hive, using the command:
sqoop job --create vjbkeufwekdfas -- import --split-by "Birthdate"
--check-column "Birthdate" --hive-database chinacloud --hive-table hive_vjbkeufwekdfas --target-dir /tmp/vjbkeufwekdfas --incremental lastmodified --username GA_TESTER1 --password 123456 --connect jdbc:oracle:thin:@172.16.50.12:1521:ORCL --query "SELECT \"Name\",\"ID\",\"Birthdate\" FROM GA_TESTER1.gmy_table1 where \$CONDITIONS" --m 1 --class-name vjbkeufwekdfas --hive-import
--fields-terminated-by '^X' --hive-drop-import-delims --null-non-string '' --null-string ''
It doesn't work, because Sqoop applies its column-checking strategy in tool.ImportTool:
16/10/08 14:58:03 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/f068e5d884f3929b6415cd8318085fea/vjbkeufwekdfas.jar
16/10/08 14:58:03 INFO manager.SqlManager: Executing SQL statement: SELECT "NAME","ID","Birthdate" FROM GA_TESTER1.gmy_table1 where (1 = 0)
16/10/08 14:58:03 ERROR util.SqlTypeMap: It seems like you are looking up a column that does not
16/10/08 14:58:03 ERROR util.SqlTypeMap: exist in the table. Please ensure that you've specified
16/10/08 14:58:03 ERROR util.SqlTypeMap: correct column names in Sqoop options.
16/10/08 14:58:03 ERROR tool.ImportTool: Imported Failed: column not found: "Birthdate"
However, if I don't use the double quotation marks:
sqoop job --create vjbkeufwekdfas -- import --split-by Birthdate
--check-column Birthdate --hive-database chinacloud --hive-table hive_vjbkeufwekdfas --target-dir /tmp/vjbkeufwekdfas --incremental lastmodified --username GA_TESTER1 --password 123456 --connect jdbc:oracle:thin:@172.16.50.12:1521:ORCL --query "SELECT \"Name\",\"ID\",\"Birthdate\" FROM GA_TESTER1.gmy_table1 where \$CONDITIONS" --m 1 --class-name vjbkeufwekdfas --hive-import
--fields-terminated-by '^X' --hive-drop-import-delims --null-non-string '' --null-string ''
Oracle then throws an error, because the field name contains lower-case letters:
16/10/08 14:37:16 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/1f05a7e94340dd92c9e3b11db1a5db46/vjbkeufwekdfas.jar
16/10/08 14:37:16 INFO manager.SqlManager: Executing SQL statement: SELECT "NAME","ID","Birthdate" FROM GA_TESTER1.gmy_table1 where (1 = 0)
16/10/08 14:37:16 INFO tool.ImportTool: Incremental import based on column Birthdate
16/10/08 14:37:16 INFO tool.ImportTool: Upper bound value: TO_DATE('2016-10-08 14:37:20', 'YYYY-MM-DD HH24:MI:SS')
16/10/08 14:37:16 INFO mapreduce.ImportJobBase: Beginning query import.
16/10/08 14:37:16 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
16/10/08 14:37:17 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/10/08 14:37:17 INFO client.RMProxy: Connecting to ResourceManager at master.huacloud.test/172.16.50.21:8032
16/10/08 14:37:25 INFO mapreduce.JobSubmitter: number of splits:1
16/10/08 14:37:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474871676561_71208
16/10/08 14:37:25 INFO impl.YarnClientImpl: Submitted application application_1474871676561_71208
16/10/08 14:37:26 INFO mapreduce.Job: The url to track the job: http://master.huacloud.test:8088/proxy/application_1474871676561_71208/
16/10/08 14:37:26 INFO mapreduce.Job: Running job: job_1474871676561_71208
16/10/08 14:37:33 INFO mapreduce.Job: Job job_1474871676561_71208 running in uber mode : false
16/10/08 14:37:33 INFO mapreduce.Job: map 0% reduce 0%
16/10/08 14:37:38 INFO mapreduce.Job: Task Id : attempt_1474871676561_71208_m_000000_0, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:266)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.sql.SQLSyntaxErrorException: ORA-00904: "BIRTHDATE": invalid identifier
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:205)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:861)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1145)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1267)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3449)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3493)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
at org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:237)
... 12 more
I am so confused, any help?
I think it could be a case-sensitivity problem. In Oracle, table and column names are not case sensitive by default, but they become case sensitive when you quote them.
Try the following,
sqoop job --create vjbkeufwekdfas -- import --split-by Birthdate
--check-column Birthdate --hive-database chinacloud --hive-table hive_vjbkeufwekdfas --target-dir /tmp/vjbkeufwekdfas --incremental lastmodified --username GA_TESTER1 --password 123456 --connect jdbc:oracle:thin:@172.16.50.12:1521:ORCL --query "SELECT \"NAME\",\"ID\",\"BIRTHDATE\" FROM GA_TESTER1.gmy_table1 where \$CONDITIONS" --m 1 --class-name vjbkeufwekdfas --hive-import
--fields-terminated-by '^X' --hive-drop-import-delims --null-non-string '' --null-string ''
If it's still not working, try eval first and make sure the query is working fine:
sqoop eval --connect jdbc:oracle:thin:@172.16.50.12:1521:ORCL --query "SELECT \"NAME\",\"ID\",\"BIRTHDATE\" FROM GA_TESTER1.gmy_table1 where \$CONDITIONS"
As per the Sqoop user guide:
" While the Hadoop generic arguments must precede any import arguments, you can type the import arguments in any order with respect to one another."
Please check your argument sequence.
Please try the below:
sqoop job --create vjbkeufwekdfas \
-- import \
--connect jdbc:oracle:thin:@172.16.50.12:1521:ORCL \
--username GA_TESTER1 \
--password 123456 \
--query "SELECT \"Name\",\"ID\",\"Birthdate\" FROM GA_TESTER1.gmy_table1 where \$CONDITIONS" \
--target-dir /tmp/vjbkeufwekdfas \
--split-by "Birthdate" \
--check-column "Birthdate" \
--incremental lastmodified \
--hive-import \
--hive-database chinacloud \
--hive-table hive_vjbkeufwekdfas \
--class-name vjbkeufwekdfas \
--fields-terminated-by '^X' \
--hive-drop-import-delims \
--null-non-string '' \
--null-string '' \
--m 1
Note that --incremental lastmodified is not supported for Hive imports, so use:
--append \
--last-value "0001-01-01 01:01:01" \

Incremental sqoop from Oracle to HDFS with condition

I am doing an incremental sqoop from Oracle to HDFS, giving a where condition like:
(LST_UPD_TMST >TO_TIMESTAMP('2016-05-31T18:55Z', 'YYYY-MM-DD"T"HH24:MI"Z"')
AND LST_UPD_TMST <= TO_TIMESTAMP('2016-09-13T08:51Z', 'YYYY-MM-DD"T"HH24:MI"Z"'))
But it is not using the index. How can I force an index so that sqoop is faster by considering only the filtered records?
What is the best option for doing an incremental sqoop? The table size in Oracle is in TBs; the table has billions of rows, and after the where condition some millions remain.
You can use --where, or --query with a where condition in the select, to filter the import results.
I am not sure about your full sqoop command, but give it a try this way:
sqoop import \
--connect jdbc:oracle:thin:@//db.example.com/dbname \
--username dbusername \
--password dbpassword \
--table tablename \
--columns "column,names,to,select,in,comma,separated" \
--where "(LST_UPD_TMST >TO_TIMESTAMP('2016-05-31T18:55Z', 'YYYY-MM-DD\"T\"HH24:MI\"Z\"') AND LST_UPD_TMST <= TO_TIMESTAMP('2016-09-13T08:51Z', 'YYYY-MM-DD\"T\"HH24:MI\"Z\"'))" \
--target-dir {hdfs/location/to/save/data/from/oracle} \
--incremental lastmodified \
--check-column LST_UPD_TMST \
--last-value {from Date/Timestamp to Sqoop in incremental}
Check more details about sqoop incremental load
Update
For incremental imports, a Sqoop saved job is recommended, since it maintains --last-value automatically.
sqoop job --create {incremental job name} \
-- import \
--connect jdbc:oracle:thin:@//db.example.com/dbname \
--username dbusername \
--password dbpassword \
--table tablename \
--columns "column,names,to,select,in,comma,separated" \
--incremental lastmodified \
--check-column LST_UPD_TMST \
--last-value 0
Here --last-value 0 imports everything from the start the first time; on each subsequent invocation, the latest value is passed automatically by the sqoop job.
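The saved job can then be run and inspected with the standard sqoop job sub-commands ({incremental job name} is the placeholder used above):

sqoop job --exec {incremental job name}
sqoop job --show {incremental job name}
sqoop job --list

--exec runs the import and updates the stored last value on success; --show prints the job definition, including the current incremental.last.value; --list shows all saved jobs.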

Import BLOB (Image) from oracle to hive

I am trying to import BLOB (image) data from Oracle to Hive using the below Sqoop command.
sqoop import --connect jdbc:oracle:thin:@host --username --password --m 3 --table tablename --hive-drop-import-delims --hive-table tablename --target-dir '' --split-by id;
But I was unsuccessful. Remember, BLOB data is stored in the Oracle database as hexadecimal, and we need to store it in the Hive table as text or binary.
What are the possible ways to do that?
Sqoop does not know how to map the BLOB datatype in Oracle to Hive, so you need to specify --map-column-hive COLUMN_BLOB=binary:
sqoop import --connect 'jdbc:oracle:thin:@host' --username $USER --password $Password --table $TABLE --hive-import --hive-table $HiveTable --map-column-hive COL_BLOB=binary --delete-target-dir --target-dir $TargetDir -m 1 -verbose
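After the import, the mapping can be verified from the Hive side; the BLOB column should be listed with type binary ($HiveTable is the variable from the command above):

hive -e "DESCRIBE $HiveTable"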

Using sqoop import, How to append rows into existing hive table?

From SQL server I imported and created a hive table using the below query.
sqoop import --connect 'jdbc:sqlserver://10.1.1.12;database=testdb' --username uname --password paswd --table demotable --hive-import --hive-table hivedb.demotable --create-hive-table --fields-terminated-by ','
The command was successful; it imported the data and created a table with 10000 records.
I then inserted 10 new records in SQL Server and tried to append these 10 records to the existing Hive table using the --where clause:
sqoop import --connect 'jdbc:sqlserver://10.1.1.12;database=testdb' --username uname --password paswd --table demotable --where "ID > 10000" --hive-import -hive-table hivedb.demotable
But the sqoop job fails with the error:
ERROR tool.ImportTool: Error during import: Import job failed!
Where am I going wrong? Are there any other alternatives to insert into the table using sqoop?
EDIT:
After slightly changing the above command, I am able to append the new rows:
sqoop import --connect 'jdbc:sqlserver://10.1.1.12;database=testdb' --username uname --password paswd --table demotable --where "ID > 10000" --hive-import -hive-table hivedb.demotable --fields-terminated-by ',' -m 1
Though it resolves the mentioned problem, I can't insert the modified rows. Is there any way to insert the modified rows without using the --incremental lastmodified parameter?
In order to append rows to the Hive table, use the same command you have been using before, just without --hive-overwrite.
I will share the two commands that I use for importing into Hive, one for overwriting and one for appending; you can use the same:
To OVERWRITE the previous records
sqoop import -Dmapreduce.job.queuename=default --connect jdbc:teradata://database_connection_string/DATABASE=database_name,TMODE=ANSI,LOGMECH=LDAP --username z****** --password ******* --query "select * from ****** where \$CONDITIONS" --split-by "HASHBUCKET(HASHROW(key to split)) MOD 4" --num-mappers 4 --hive-table hive_table_name --boundary-query "select 0, 3 from dbc.dbcinfo" --target-dir directory_name  --delete-target-dir --hive-import --hive-overwrite --driver com.teradata.jdbc.TeraDriver
TO APPEND to the previous records
sqoop import -Dmapreduce.job.queuename=default --connect jdbc:teradata://connection_string/DATABASE=db_name,TMODE=ANSI,LOGMECH=LDAP --username ****** --password ******--query "select * from **** where \$CONDITIONS" --split-by "HASHBUCKET(HASHROW(key to split)) MOD 4" --num-mappers 4 --hive-import --hive-table guestblock.prodrptgstrgtn --boundary-query "select 0, 3 from dbc.dbcinfo" --target-dir directory_name --delete-target-dir --driver com.teradata.jdbc.TeraDriver
Note that I am using 4 mappers, you can use more as well.
I am not sure if you can give the --append option directly in sqoop together with --hive-import; it is still not available, at least in version 1.4.
The default behavior is append when --hive-overwrite and --create-hive-table are missing (at least in this context).
I go with nakulchawla09's answer, but remind yourself to keep the --split-by option. This ensures the split file names in the Hive data store are created appropriately; otherwise you will not like the default naming. You can ignore this comment if you don't care about the backstage Hive warehouse naming and data store. Here is what I saw when I tried with the below command.
Before the append
beeline:hive2> select count(*) from geolocation;
+-------+--+
| _c0 |
+-------+--+
| 8000 |
+-------+--+
file in hive warehouse before the append
-rwxrwxrwx 1 root hdfs 479218 2018-10-12 11:03 /apps/hive/warehouse/geolocation/part-m-00000
sqoop command for appending an additional 8000 records:
sqoop import --connect jdbc:mysql://localhost/RAWDATA --table geolocation --username root --password hadoop --target-dir /rawdata --hive-import --driver com.mysql.jdbc.Driver --m 1 --delete-target-dir
It created the files below. You can see the file name is not great, because I did not give a --split-by option or a split hash (which can be a datetime or date).
-rwxrwxrwx 1 root hdfs 479218 2018-10-12 11:03 /apps/hive/warehouse/geolocation/part-m-00000
-rwxrwxrwx 1 root hdfs 479218 2018-10-12 11:10 /apps/hive/warehouse/geolocation/part-m-00000_copy_1
Hive records after the append:
beeline:hive2> select count(*) from geolocation;
+-------+--+
| _c0 |
+-------+--+
| 16000 |
+-------+--+
We can use this command:
sqoop import --connect 'jdbc:sqlserver://10.1.1.12;database=testdb' --username uname --password paswd --query 'select * from demotable where ID > 10000' --hive-import --hive-table hivedb.demotable --target-dir demotable_data
Use the --append option and -m 1, so it will look like below:
sqoop import --connect 'jdbc:sqlserver://10.1.1.12;database=testdb' --username uname --password paswd --table demotable --hive-import --hive-table hivedb.demotable --append -m 1
