Sqoop function '--map-column-hive' being ignored - hadoop

I am trying to import a table into Hive as Parquet, and the --map-column-hive column_name=timestamp option is being ignored. The column 'column_name' is originally of type datetime in SQL Server, and Sqoop converts it to bigint in Parquet. I want to convert it to a timestamp through Sqoop, but it is not working.
sqoop import \
--table table_name \
--driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
--connect jdbc:sqlserver://servername \
--username user --password pw \
--map-column-hive column_name=timestamp \
--as-parquetfile \
--hive-import \
--hive-table table_name -m 1
When I view the table in hive, it still shows the column with its original datatype.
I tried column_name=string and that did not work either.
I think this may be an issue with converting files to parquet but I am not sure. Does anyone have a solution to fix this?
I get no errors when running the command; it just completes the import as if the option did not exist.

Before Hive 1.2, timestamp support is not available in ParquetSerDe; only the binary data type is supported in 1.1.0.
Please upgrade your Hive version to 1.2 or later and it should work.
Please check the issue log and release notes below.
https://issues.apache.org/jira/browse/HIVE-6384
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12329345&styleName=Text&projectId=12310843
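If upgrading is not immediately possible, a common workaround is to leave the column as bigint in the Parquet file and expose a converted value through a Hive view. A minimal sketch, assuming the column holds epoch milliseconds (Sqoop's usual Avro/Parquet representation of a SQL Server datetime); the view name is hypothetical:
# Convert the bigint epoch-millisecond column to a timestamp in a view.
hive -e "
CREATE VIEW IF NOT EXISTS table_name_with_ts AS
SELECT t.*,
       CAST(from_unixtime(CAST(t.column_name / 1000 AS BIGINT)) AS TIMESTAMP) AS column_name_ts
FROM table_name t;
"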

Related

Incremental update in HIVE table using sqoop

I have a table in oracle with only 4 columns...
Memberid --- bigint
uuid --- String
insertdate --- date
updatedate --- date
I want to import that data into a Hive table using Sqoop. I created the corresponding Hive table with
create EXTERNAL TABLE memberimport(memberid BIGINT, uuid varchar(36), insertdate timestamp, updatedate timestamp) LOCATION '/user/import/memberimport';
and sqoop command
sqoop import --connect jdbc:oracle:thin:@dbURL:1521/dbName --username ** --password *** --hive-import --table MEMBER --columns 'MEMBERID,UUID,INSERTDATE,UPDATEDATE' --map-column-hive MEMBERID=BIGINT,UUID=STRING,INSERTDATE=TIMESTAMP,UPDATEDATE=TIMESTAMP --hive-table memberimport -m 1
It works properly and imports the data into the Hive table.
Now I want to update this table incrementally on updatedate (last value being today's date) so that I can get the day-to-day updates from that OLTP table into my Hive table using Sqoop.
For Incremental import I am using following sqoop command
sqoop import --hive-import --connect jdbc:oracle:thin:@dbURL:1521/dbName --username *** --password *** --table MEMBER --check-column UPDATEDATE --incremental append --columns 'MEMBERID,UUID,INSERTDATE,UPDATEDATE' --map-column-hive MEMBERID=BIGINT,UUID=STRING,INSERTDATE=TIMESTAMP,UPDATEDATE=TIMESTAMP --hive-table memberimport -m 1
But I am getting this exception:
"Append mode for hive imports is not yet supported. Please remove the parameter --append-mode"
When I remove --hive-import it runs properly, but I do not find those new updates in the Hive table that exist in the OLTP table.
Am I doing anything wrong?
Please suggest how I can run an incremental update from Oracle to Hive using Sqoop.
Any help will be appreciated.
Thanks in advance.
Although I don't have the resources to replicate your scenario exactly, you might want to try building a Sqoop job and testing your use case.
sqoop job --create sqoop_job \
-- import \
--connect "jdbc:oracle://server:port/dbname" \
--username=(XXXX) \
--password=(YYYY) \
--table (TableName)\
--target-dir (Hive Directory corresponding to the table) \
--append \
--fields-terminated-by '(character)' \
--lines-terminated-by '\n' \
--check-column "(Column To Monitor Change)" \
--incremental append \
--last-value (last value of column being monitored) \
--outdir (log directory)
When you create a Sqoop job, it takes care of --last-value for subsequent runs. Also, here I have used the Hive table's data file as the target for the incremental update.
Hope this provides a helpful direction to proceed.
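Once the job exists, subsequent runs are just an execution by name; a minimal usage sketch with the standard sqoop job subcommands:
# Saved jobs keep track of --last-value between runs.
sqoop job --list            # show saved jobs
sqoop job --exec sqoop_job  # run the incremental import created above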
There is no direct way to achieve this in Sqoop. However, you can use a four-step incremental-update strategy.

sqoop-hive Import adding an extra column

I have imported data from Sqoop to Hive successfully. I then added a column in Oracle and imported that particular column to Hive again using sqoop-import. However, the new data is appended into the first column, the remaining columns are null, and no new column appears in Hive. Can anyone resolve the issue?
Without looking at your import statements, I am assuming that in your second import you are trying to append to the existing import, but only importing the new column using the --columns and --append arguments. It will not work this way, because it appends to the end of the file, not to the end of each line.
You will need to overwrite the existing data in HDFS using --hive-overwrite and alter the Hive table to add the additional column, OR just drop the Hive table and use --create-hive-table in the Sqoop command.
So your import command should look like this:
sqoop import \
--connect $CONNECTION_STR \
--username $USER \
--password $PASS \
--table $ORACLE_TABLE \
--hive-import \
--hive-overwrite \
--hive-home $HIVE_HOME \
--hive-table $HIVE_TABLE
Change the values to the actual values for your environment.
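If you would rather keep the existing table and just add the column (the first option above), the Hive side can be handled with a plain ALTER TABLE before rerunning the overwrite import; the column name and type here are hypothetical:
# Add the new column to the existing Hive table, then rerun the full
# import with --hive-overwrite as shown above.
hive -e "ALTER TABLE ${HIVE_TABLE} ADD COLUMNS (new_column_name STRING);"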

Sqoop job incremental import using free form query

I am trying to do a Sqoop job incremental import using a free-form query. Here's the command being used:
sqoop job --create importjobinl -- import --connect jdbc:mysql://localhost/test --username training --password training --query 'select id,name,unix_timestamp(time_updated) from intest where $CONDITIONS' --target-dir /user/new/lll/`date +%d%T|sed 's/://g'` -m 1 --check-column time_updated --incremental append --last-value '1441526438'
The job is not getting created. It shows:
Incremental imports require a table.
Try --help for usage instructions.
It works when I use --table intest instead of --query, but I want to use --query to convert the date to epoch time using unix_timestamp, since the value in the MySQL table intest is in yyyy-mm-dd format.
Version used: Sqoop 1.2.0-cdh3u0
Sqoop incremental import support for free-form queries was added in Sqoop 1.4.2.
JIRA link: Sqoop Incremental import Support for free form queries
Since you are using Sqoop 1.2.0, this feature might not be available for you to use.
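If upgrading Sqoop is not an option, a hedged workaround is to run the incremental import against the table itself (which 1.2.0 supports) and do the unix_timestamp() conversion afterwards in Hive; the staging directory and the placeholder --last-value date below are assumptions:
# Workaround sketch for Sqoop 1.2.0: table-based incremental append,
# with the epoch-time conversion deferred to Hive.
sqoop import --connect jdbc:mysql://localhost/test \
  --username training --password training \
  --table intest \
  --check-column time_updated \
  --incremental append \
  --last-value '2015-09-06' \
  --target-dir /user/new/lll/staging -m 1
# Then convert in a Hive query or view over that directory, e.g.
# select id, name, unix_timestamp(time_updated) from ...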
Do an initial pull using Sqoop.
Make sure the date format of your column is YYYY-MM-DD HH:MM:SS if you are using a last-modified column as the check column.
Run the statement below for an incremental load into your Hive table; it includes a free-form query.
sqoop import --connect jdbc:mysql://localhost/test --username training --password training --query "select * from intest where \$CONDITIONS" --hive-import --hive-table db_name_x.table_name_x --incremental lastmodified --check-column date_x --target-dir /user/xyz -m 1

Fixed length File from Sqoop

Is it possible to import a file in fixed-length format from a DB2 database using sqoop import? I have not found anything about this so far in the Sqoop documentation.
Thanks
Arani
If you are looking to import columns of a fixed length from any database, you can use a free-form query like the one below:
sqoop import --connect jdbc:oracle:* --username name --password pwd -e " select substr(COL1,1,4000),substr(COL2,1,4000),COL3 from TABLE_NAME where \$CONDITIONS" --target-dir /user/username/table_name --as-textfile -m 1
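If the goal is a truly fixed-length record rather than just truncated columns, the free-form query can also pad and concatenate the fields to a constant width. A hedged sketch using RPAD and string concatenation (available in recent DB2 and in Oracle); the widths and target directory are assumptions:
# Pad each column to a fixed width and concatenate, so every output
# record has the same length.
sqoop import --connect jdbc:db2://server:port/dbname \
  --username name --password pwd \
  -e "select rpad(COL1, 50, ' ') || rpad(COL2, 20, ' ') || rpad(COL3, 10, ' ') as fixed_rec from TABLE_NAME where \$CONDITIONS" \
  --target-dir /user/username/table_name_fixed \
  --as-textfile -m 1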

Sqoop Incremental import and update

I am trying to import data from SQL into a Hive database. The goal is to propagate the changes in the Oracle database to Hive using Sqoop import. The Sqoop command is as follows:
sqoop import -D mapred.child.java.opts='\-Djava.security.egd=file:/dev/../dev/urandom' \
--connect 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=)(PORT=1545))(ADDRESS=(PROTOCOL=TCP)(HOST=)(PORT=1545)))(CONNECT_DATA=(SERVICE_NAME=)(SERVER=DEDICATED)))' \
--username abcde \
--password 1234rgtds \
--table Customer_Acc \
--columns Name,ID,Address,Date_booking,Last_update_date \
-m 1 \
--target-dir /final/table \
--hive-import \
--hive-table tesupd \
--map-column-hive Name,ID,Address,Date_booking \
--null-string '\\N' \
--null-non-string '\\N' \
--hive-delims-replacement ' ' \
--incremental lastmodified \
--check-column Last_update_date \
--last-value "2009-12-31 12:14:28"
The final output should be the data with values greater than the last value, but in the above case the data is being appended instead of incrementally updated.
I want the data to be updated rather than appended.
Use the --merge-key option in your sqoop-import command. It will replace the older records with the latest records.
Alternatively, you can use the sqoop-merge command as well, but that has to be done in two steps: first a sqoop-import without --merge-key, and then a sqoop-merge.
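A hedged sketch of both options, reusing the key column ID and the check column from the question; the connection placeholders, the delta directory, and the codegen jar/class names are assumptions:
# Option 1: single step. With --merge-key, Sqoop merges changed rows into the
# HDFS target directory instead of appending (no --hive-import here).
sqoop import --connect $CONNECT --username $USER --password $PASS \
  --table Customer_Acc \
  --incremental lastmodified \
  --check-column Last_update_date \
  --last-value "2009-12-31 12:14:28" \
  --merge-key ID \
  --target-dir /final/table -m 1
# Option 2: two steps. Import the deltas without --merge-key, then merge them
# onto the existing directory with sqoop-merge (needs the codegen jar/class).
sqoop merge --new-data /final/table_delta --onto /final/table \
  --target-dir /final/table_merged \
  --jar-file Customer_Acc.jar --class-name Customer_Acc \
  --merge-key ID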
Try using --incremental append rather than --incremental lastmodified.
With --incremental append, the last value of the field mentioned is stored in the Sqoop metastore as 'incremental.last.value', which is updated whenever the job is executed. Using --incremental append you do not have to update the last value in your query; it is updated automatically.
This way your value is always kept up to date (in the Sqoop metastore) and there will not be any redundant data.
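The stored value lives in the Sqoop metastore only when the import is saved as a job, so it helps to wrap the import in one; the job name below is hypothetical:
# Inspect the saved job; the output includes the recorded
# incremental.last.value property after each successful run.
sqoop job --show incr_customer_import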
Neither Sqoop nor Hive can directly update data in Hive through Sqoop imports. Please follow the steps in the link below for row-level updates.
http://hortonworks.com/blog/four-step-strategy-incremental-updates-hive/
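In outline, that strategy keeps a base table and an incremental (delta) table in Hive and reconciles them by keeping the newest row per key. A rough HiveQL sketch of the reconcile step, with hypothetical table names (base_table, incr_table) and columns (id, last_update_date):
# Reconcile step of the four-step strategy: for each id keep only the row
# with the latest last_update_date across the base and delta data.
hive -e "
CREATE VIEW IF NOT EXISTS reconcile_view AS
SELECT t.*
FROM (SELECT * FROM base_table
      UNION ALL
      SELECT * FROM incr_table) t
JOIN (SELECT id, MAX(last_update_date) AS max_date
      FROM (SELECT * FROM base_table
            UNION ALL
            SELECT * FROM incr_table) u
      GROUP BY id) m
ON t.id = m.id AND t.last_update_date = m.max_date;
"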
So your data is mutable and you'd like to modify, in HDFS, the records which have been changed in your DB.
For this, you need to use --incremental lastmodified together with a --merge-key, as in the job below. You also need to create a Sqoop job, because that will capture the most recent --last-value and serialize it back into the saved job.
You should create a Sqoop job which looks something like this.
sqoop job \
--create jobName \
-- \
import \
--connect jdbc:oracle:thin:@hostname:port:sid \
--username user \
--password-file fileOnHDFS.password \
--table tableName \
--incremental lastmodified \
--check-column UPDATE_DATUM \
--last-value '1985-01-01' \
--merge-key ID
More details can be found in the Sqoop User Guide (v1.4.6)
