I'm getting an error while mapping a SQL Server table to a Parquet table. I created the Parquet table to match the SQL Server table, with the corresponding column data types.
But Sqoop infers the timestamp column as long, which creates a problem when loading data into the Parquet table. Loading the data into Parquet appears to succeed, but fetching it fails.
Error Message:
hive> select updated_at from bkfs.address_par1;
OK
Failed with exception java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.hive.serde2.io.TimestampWritable
Time taken: 0.146 seconds
Sqoop's Parquet import interprets the Oracle Date and Timestamp data types as Long, i.e. it tries to store the date as a Unix epoch value. So the import can be handled by converting the column to a string, like below:
sqoop import \
--connect [connection string] \
--username [username] \
--password [password] \
--query "select to_char(date_col,'YYYY-MM-DD HH:mi:SS.SS') as date_col from test_table where \$CONDITIONS" \
--as-parquetfile \
-m 1 \
--delete-target-dir \
--target-dir /sample/dir/path/hive_table
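Since the column then arrives as a plain string, the Hive side can declare it as string and cast it back when querying. An illustrative sketch (not part of the original answer; test_table_par is a hypothetical table name over the same target directory):
CREATE EXTERNAL TABLE test_table_par (date_col string)
STORED AS PARQUET
LOCATION '/sample/dir/path/hive_table';

SELECT CAST(date_col AS timestamp) AS date_col FROM test_table_par;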
You can also have a look at this question, which was posted already:
Sqoop function '--map-column-hive' being ignored
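Another option often mentioned in that thread (an assumption here, not spelled out in this answer) is to force Sqoop to write the column as a string with --map-column-java, so no conversion is needed in the source query. A sketch, with ADDRESS and UPDATED_AT as hypothetical source table and column names:
sqoop import \
--connect [connection string] \
--username [username] \
--password [password] \
--table ADDRESS \
--map-column-java UPDATED_AT=String \
--as-parquetfile \
-m 1 \
--delete-target-dir \
--target-dir /sample/dir/path/hive_table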
Sqoop's incremental import needs the last modified date to be provided in --last-value, in a format similar to 2016-09-05 06:04:27.0. The problem in this case is that in the source MySQL database the update_date data is stored as an epoch timestamp (e.g. 1550218178).
With the following sqoop command
sqoop import --verbose \
--connect jdbc:mysql://192.18.2.5:3306/iprocure_ip \
--table depot \
--username usernamehere \
--password-file /user/admin/.password \
--check-column update_date \
--incremental lastmodified \
--last-value '1550218178' \
--target-dir /user/admin/notexist \
--merge-key "depot_id"
Sqoop throws an error stating that the epoch timestamp provided is not a timestamp:
19/03/06 12:57:31 ERROR manager.SqlManager: Column type is neither timestamp nor date!
19/03/06 12:57:31 ERROR sqoop.Sqoop: Got exception running Sqoop:
java.lang.RuntimeException: Column type is neither timestamp nor date!
java.lang.RuntimeException: Column type is neither timestamp nor date!
at org.apache.sqoop.manager.ConnManager.datetimeToQueryString(ConnManager.java:788)
at org.apache.sqoop.tool.ImportTool.initIncrementalConstraints(ImportTool.java:350)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:526)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:656)
at org.apache.sqoop.Sqoop.run(Sqoop.java:150)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:186)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:240)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:249)
at org.apache.sqoop.Sqoop.main(Sqoop.java:258)
How can one fetch incremental data with sqoop using Epoch timestamp?
The exception is clearly saying that there is a type mismatch: Sqoop is expecting a date or timestamp, but your --last-value is an integer.
If you read the Sqoop documentation, it says:
Incremental imports are performed by comparing the values in a check column against a reference value for the most recent import. For example, if the --incremental append argument was specified, along with --check-column id and --last-value 100, all rows with id > 100 will be imported
Since Sqoop is internally Java, the check column must match the java.sql.Date/Timestamp types. Recheck the DDL and adapt the sqoop import command.
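One workaround that is sometimes suggested (not part of the original answer) is to treat the epoch column as a plain increasing number and switch to append mode, which accepts integer check columns. A sketch against the same depot table; note that, unlike lastmodified with --merge-key, this only appends rows with a larger update_date and does not merge updated rows:
sqoop import --verbose \
--connect jdbc:mysql://192.18.2.5:3306/iprocure_ip \
--table depot \
--username usernamehere \
--password-file /user/admin/.password \
--check-column update_date \
--incremental append \
--last-value 1550218178 \
--target-dir /user/admin/notexist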
Using Sqoop, I am importing from an Oracle table into HDFS and loading it into a managed table by giving the HDFS path as the location. Below is the sqoop command:
sqoop import \
--connect jdbcconnection \
--username user \
--password password \
--table EMPDETAILS \
--columns "EMP_ID,EMP_NAME,EMP_DOB,EMP_DOJ" \
--target-dir hdfspath \
-m 1
This command executed successfully, but when loading into the Hive table using the HDFS location, it gives NULL for EMP_DOB (the Hive type is date):
create table EMP_TARGET(
empid int,
empname string,
empdob date,
empdoj timestamp)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 'hdfspath';
When I execute the above query, the empdob column in the target Hive table gives NULL, but empdoj gives the correct value. When I checked the value in the HDFS path for empdob, it is 1980-01-01 00:00:00:0.
Kindly help to solve the issue.
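No answer is included here, but the symptom points at the file format: a Hive DATE column in a delimited text file expects just yyyy-MM-dd, so the full timestamp-style value Sqoop wrote cannot be parsed and comes back as NULL, while TIMESTAMP parses it. A hedged sketch of one common workaround (not from the original thread) is to declare the column as timestamp in the table over the HDFS path and derive the date at query time:
create table EMP_TARGET(
empid int,
empname string,
empdob timestamp,
empdoj timestamp)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 'hdfspath';

-- cast back to a date when reading
select empid, empname, to_date(empdob) as empdob, empdoj from EMP_TARGET;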
I have been using Sqoop to import data from MySQL to Hive; the command I used is below:
sqoop import --connect jdbc:mysql://localhost:3306/datasync \
--username root --password 654321 \
--query 'SELECT id,name FROM test WHERE $CONDITIONS' --split-by id \
--hive-import --hive-database default --hive-table a \
--target-dir /tmp/yfr --as-parquetfile
The Hive table is created and the data is inserted; however, I cannot find the Parquet file.
Does anyone know where it is?
Best regards,
Feiran
Sqoop import to hive works in 2 steps:
Fetching data from RDBMS to HDFS
Creating the Hive table (if it does not exist) and loading the data into it
In your case,
first the data is stored at --target-dir, i.e. /tmp/yfr.
Then it is loaded into the Hive table a using the
LOAD DATA INPATH ... INTO TABLE ...
command.
As mentioned in the comments, the data is moved to the Hive warehouse directory; that's why there is no data left in --target-dir.
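To see where the data actually ended up, one option (a minimal sketch, assuming the default warehouse location) is to ask Hive for the table's storage path and then list it on HDFS:
hive -e "DESCRIBE FORMATTED default.a;"   # the Location field shows where the Parquet files live
hdfs dfs -ls /user/hive/warehouse/a       # default warehouse path for table a; adjust to the Location shown above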
I'm trying sqoop to perform incremental import from Teradata DB to Hive. Below is the query:
sqoop import \
--connect jdbc:teradata://xxx.xxx.x.xx/DATABASE=DBN \
--driver com.teradata.jdbc.TeraDriver \
--username userN --password pass \
--query "SELECT alias.colA, alias.call_date, alias.colB, alias.colC FROM tableName alias where \$CONDITIONS" \
--target-dir /apps/hive/warehouse/staging.db/tableName \
-m 26 \
--check-column call_date \
--incremental append \
--split-by alias.colA \
--last-value '2016-02-01'
The column call_date is of DATE type, with values in the format 'YYYY-MM-DD'.
When I use 'append' for --incremental, everything works fine. But when I put 'lastmodified', the following error is thrown:
ERROR util.SqlTypeMap: It seems like you are looking up a column that does not
ERROR util.SqlTypeMap: exist in the table. Please ensure that you've specified
ERROR util.SqlTypeMap: correct column names in Sqoop options.
ERROR tool.ImportTool: Imported Failed: column not found: call_date
I'm using Sqoop 1.4.4.2.1 on HDP 2.1, while the Teradata DB is 14.10.
Any pointers will be helpful.
I think, in the case of a free-form query, you can perform the last-value check in the query itself, something like this:
"SELECT alias.colA, alias.call_date, alias.colB, alias.colC FROM tableName alias where call_date >'2016-02-01' and \$CONDITIONS" .
Reference (see the section "Incrementally Updating Data in Hive" > 1. Ingest the data):
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_dataintegration/content/incrementally-updating-hive-table-with-sqoop-and-ext-table.html
I have a table in Oracle with only 4 columns...
Memberid --- bigint
uuid --- String
insertdate --- date
updatedate --- date
I want to import that data into a Hive table using Sqoop. I created the corresponding Hive table with:
create EXTERNAL TABLE memberimport(memberid BIGINT, uuid varchar(36), insertdate timestamp, updatedate timestamp) LOCATION '/user/import/memberimport';
and the sqoop command:
sqoop import \
--connect jdbc:oracle:thin:@dbURL:1521/dbName \
--username ** --password *** \
--hive-import \
--table MEMBER \
--columns 'MEMBERID,UUID,INSERTDATE,UPDATEDATE' \
--map-column-hive MEMBERID=BIGINT,UUID=STRING,INSERTDATE=TIMESTAMP,UPDATEDATE=TIMESTAMP \
--hive-table memberimport \
-m 1
It works properly and is able to import the data into the Hive table.
Now I want to update this table incrementally on updatedate (last value = today's date), so that I can get the day-to-day updates of that OLTP table into my Hive table using Sqoop.
For the incremental import I am using the following sqoop command:
sqoop import \
--hive-import \
--connect jdbc:oracle:thin:@dbURL:1521/dbName \
--username *** --password *** \
--table MEMBER \
--check-column UPDATEDATE \
--incremental append \
--columns 'MEMBERID,UUID,INSERTDATE,UPDATEDATE' \
--map-column-hive MEMBERID=BIGINT,UUID=STRING,INSERTDATE=TIMESTAMP,UPDATEDATE=TIMESTAMP \
--hive-table memberimport \
-m 1
But I am getting this exception:
"Append mode for hive imports is not yet supported. Please remove the parameter --append-mode"
When I remove --hive-import it runs properly, but I do not find the new updates from the OLTP table in my Hive table.
Am I doing anything wrong?
Please suggest how I can run an incremental update from Oracle to Hive using Sqoop.
Any help will be appreciated.
Thanks in advance.
Although I don't have the resources to replicate your scenario exactly, you might want to try building a sqoop job and testing your use case:
sqoop job --create sqoop_job \
-- import \
--connect "jdbc:oracle://server:port/dbname" \
--username=(XXXX) \
--password=(YYYY) \
--table (TableName) \
--target-dir (Hive Directory corresponding to the table) \
--append \
--fields-terminated-by '(character)' \
--lines-terminated-by '\n' \
--check-column "(Column To Monitor Change)" \
--incremental append \
--last-value (last value of column being monitored) \
--outdir (log directory)
When you create a sqoop job, it takes care of --last-value for subsequent runs. Also, here I have used the Hive table's data directory as the target for the incremental update.
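For completeness (a small sketch, not part of the original answer), the saved job is then listed, inspected, and executed with the sqoop job tool; the stored last value is advanced automatically after each successful run:
sqoop job --list                # show saved jobs
sqoop job --show sqoop_job      # print the job definition, including the saved last value
sqoop job --exec sqoop_job      # run the incremental import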
Hope this provides a helpful direction to proceed.
There is no direct way to achieve this in Sqoop. However, you can use a 4-step strategy.